http://47.88.85.238/index.php/soic/issue/feedStatistics, Optimization & Information Computing2026-03-16T19:16:37+08:00David G. Yudavid.iapress@gmail.comOpen Journal Systems<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advancement of statistics, optimization and applications in information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, statistical analysis in market, business, finance, insurance, economic and social science, etc</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization and Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming </li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligence computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problem and imaging sciences</li> <li class="show">Genetic algorithm, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>http://47.88.85.238/index.php/soic/article/view/2733Quantile-Based Extropy Measure for Record Statistics2026-03-16T02:19:31+08:00Salook Sharmavikas_iitr82@yahoo.co.inVikas Kumarvikas_iitr82@yahoo.co.in<p>Compared with the distribution function-based extropy measure (Lad et al. 2015), quantile-based extropy measures have a few special characteristics (Krishnan et al. 2020). The present communication deals with the study of the quantile-based extropy measure for record statistics. In this context, a generalized model for which there is no cdf or pdf is examined, and several examples are provided for illustration purposes. Additionally, we examine the dynamic version of the suggested extropy measure for record statistics and provide characterization results for it. 
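<p>For reference, a sketch of the quantities involved, using the standard definitions from the cited works (notation ours): for a random variable X with density f, quantile function \(Q(u)=F^{-1}(u)\) and quantile density \(q(u)=Q'(u)\), so that \(f(Q(u))=1/q(u)\), the extropy and its quantile-based counterpart are</p> \[ J(X) = -\frac{1}{2}\int f^{2}(x)\,dx, \qquad J_{Q}(X) = -\frac{1}{2}\int_{0}^{1}\frac{du}{q(u)}, \] <p>the second form following from the first by the substitution \(x = Q(u)\), \(dx = q(u)\,du\); it requires only the quantile function, which is why it applies to models without a closed-form cdf or pdf.</p>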
Finally, we investigate the suggested extropy measure in the F^{Y} family of distributions.</p>2026-01-13T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2843Locally D- and A-Optimal Design Framework for Poisson Regression with Square Root Link function2026-03-16T02:19:32+08:00Tofan Biswaltofankumarbiswal100@gmail.comMahesh Kumar Pandamahesh2123ster@gmail.comGurjeet Singh Waliagswalia@cuo.ac.in<p>The majority of the research articles on optimum experimental designs for generalized linear models focus on Poisson regression models with a log-link function. In the generalized linear model (GLM) configuration, the information matrix depends on the unknown parameters of the model. In such a case, an experimenter must adopt the strategy of identifying locally optimal designs, i.e., first guessing the best values for the parameters and then calculating the optimal designs. In this article, we examine locally D- and A-optimal designs for a Poisson regression model using a square-root link function. The equivalence theorem validates the necessary and sufficient conditions of these optimality criteria.</p>2026-01-22T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2883Dynamic Volatility and Tail Risk in BTC, BWP, and ZAR Exchange Rates: Bayesian SARMA-GARCH with Skewed Error Distributions2026-03-16T02:19:33+08:00Letlhogonolo Mosanawelmosanawe2@gmail.comKatleho Makatjanemakatjanek@ub.ac.bw<p>Exchange rate volatility presents significant risks to investors and governments, especially in developing economies and the cryptocurrency market, where unforeseen shocks may lead to considerable financial losses. Standard risk metrics frequently do not account for time-varying volatility, skewness, and fat-tailed return distributions, thereby constraining their predictive reliability. This research utilises a Bayesian SARMA–GARCH methodology with time-varying parameters to evaluate exchange rate risk for BTC/USD, BWP/USD, and ZAR/USD. Daily log returns are modelled with Asymmetric Generalised Error Distributions to address heavy-tailed and skewed characteristics. One-step-ahead forecasts of Value-at-Risk (VaR) and Expected Shortfall (ES) are produced and systematically backtested employing credible intervals, weighted continuous ranked probability scores, and dynamic quantile tests. The findings demonstrate substantial predictive accuracy, with Mean Prediction Interval Widths of 0.0518 for BTC/USD, 0.0722 for BWP/USD, and 0.0413 for ZAR/USD, and the majority of observed returns remaining within the 99% prediction intervals. BTC/USD responds rapidly to disturbances, ZAR/USD demonstrates persistent volatility, and BWP/USD reflects extended effects. 
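<p>To make the backtesting step concrete: given posterior-predictive draws of the next day's return, one-step VaR and ES at level \(\alpha\) reduce to a quantile and a tail mean. A minimal sketch under a left-tail convention (variable names are ours, not the authors'):</p>
<pre><code>import numpy as np

def var_es(draws, alpha=0.01):
    """One-step VaR/ES from posterior-predictive return draws."""
    draws = np.asarray(draws, dtype=float)
    var = np.quantile(draws, alpha)      # alpha-quantile of simulated returns
    es = draws[draws <= var].mean()      # mean of returns beyond the VaR
    return var, es

rng = np.random.default_rng(0)
var, es = var_es(0.01 * rng.standard_t(df=4, size=100_000))
</code></pre>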
The integration of time-varying dynamics and heavy-tailed distributions enhances the reliability of Value at Risk (VaR) and Expected Shortfall (ES) forecasts, thereby facilitating improved risk management, portfolio allocation, and regulatory oversight.</p>2025-12-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2948 The Epanechnikov-Kumaraswamy Distribution: A Superior Model for Bounded Data with Heavy-Tailed Behavior2026-03-16T02:19:34+08:00Naser Odatnodat@jadara.edu.jo<p>For [0,1]-bounded data, we present the Epanechnikov-Kumaraswamy Distribution (EKD), a two-parameter model that performs better than more conventional options such as the Beta distribution in situations that call for a sharp probability mass concentration (e.g., reliability engineering). EKD achieves better MLE consistency (MSE → 0 faster in simulations) and lower Rényi entropy (−1.99 vs. −1.59 for Beta, ρ=2) by combining Kumaraswamy's flexibility with Epanechnikov's optimal kernel features. Its usefulness is demonstrated by real-world applications to aircraft failure data.</p>2025-11-19T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2965Solving 0–1 knapsack problem by an improved binary monarch butterfly algorithm2026-03-16T02:19:34+08:00Ghalya Basheerghalia.tawfeek@uomosul.edu.iqLamyaa Mohammedlomuaajasem@uomosul.edu.iqZakariya Algamalzakariya.algamal@uomosul.edu.iq<p>The binary monarch butterfly optimization algorithm (BMBOA) is a meta-heuristic algorithm that has been applied widely in combinatorial optimization problems. The binary knapsack problem has received considerable attention in combinatorial optimization. In this paper, a new time-varying transfer function is proposed to improve the exploration and exploitation capability of the BMBOA, yielding better solutions in shorter computing time. Based on small, medium, and high-dimensional sizes of the knapsack problem, the computational results reveal that the proposed time-varying transfer functions obtain the best results not only by finding the best possible solutions but also by yielding short computational times. Compared to the standard transfer functions, the efficiency of the proposed time-varying transfer functions is superior, especially in the high-dimensional sizes.</p>2025-11-19T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2993Bayesian and Likelihood Inference for the SIR Model Using Skellam’s Distribution with Real Application to COVID-192026-03-16T02:19:35+08:00ABDELATI LAGZINIabdelati.lagzini@usms.maHamid El Maroufyh.elmaroufy@usms.maAbdelkrim Merbouhamerbouha@usms.maMohamed El Omarielomari.m@ucd.ac.ma<p>In this paper, we focus on the well-known SIR epidemic model, formulated as a Markov counting process with the discrete Skellam distribution. Our main objective is to estimate its key parameters, namely the infection and recovery rates. We develop a Bayesian approach that relies on Markov chain Monte Carlo and data-augmentation techniques, and establish the posterior distributions under suitable priors. We then compare the Bayesian estimators with maximum likelihood (ML) estimators, for which we study weak consistency and asymptotic normality. 
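<p>To make the counting-process formulation concrete: over a short interval, new infections and recoveries are independent Poisson counts, so the increment of I is (truncation aside) Skellam-distributed. A minimal simulation sketch under the usual density-dependent rates (illustrative only, not the authors' code):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(1)

def sir_step(S, I, beta, gamma, N, dt=1.0):
    """One step of the stochastic SIR counting process."""
    infections = rng.poisson(beta * S * I / N * dt)
    recoveries = rng.poisson(gamma * I * dt)
    infections = min(infections, S)               # cannot exceed susceptibles
    recoveries = min(recoveries, I + infections)  # cannot exceed infectives
    # Delta I = infections - recoveries is Skellam-distributed before truncation
    return S - infections, I + infections - recoveries
</code></pre>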
Finally, the theoretical results are supported with numerical simulations and illustrated through a real-world application to COVID-19 data from Morocco.</p>2025-12-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3027Estimation of Reliability Based on Rayleigh Distribution2026-03-16T02:19:36+08:00Ayman Hazaymehaymanha@jadara.edu.joAnwar Batihaa.bataihah@jadar.edu.joNaser Alodatnodat@jadara.edu.joTariq Qawasmeht.qawasmeh@aau.edu.jo Ra'ft Abdelrahimrafatshaab@yahoo.comAbdelgabar Adam Hassanahassan@ju.edu.saAyser Taahataymanha@jadara.edu.joFaisal Al-Sharqifaisal.ghazi@uoanbar.edu.iq<p>The estimation of R = P(X > Y), where X and Y are independent Rayleigh random variables, is the focus of this work. Both the maximum likelihood estimator of R and an approximate maximum likelihood estimator are suggested, and the asymptotic distribution of the maximum likelihood estimator of R is derived. This asymptotic distribution can be used to construct confidence intervals for R.</p>2026-01-11T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3180Equity UCITS in Morocco: Conceptual Foundations, Financial Risk Considerations, and the Contribution of Artificial Intelligence2026-03-16T02:19:37+08:00Zineb BELLACHEzineb.bellache-etu@etu.univh2c.maHicham EL BOUANANIh.el.bouanani@gmail.com<p>We adopt a hybrid approach that integrates traditional risk assessment methods with cutting-edge artificial intelligence techniques. The motivation for comparing three distinct models—XGBoost (gradient boosting), LSTM (recurrent neural networks), and Random Forest (ensemble learning)—stems from the need to evaluate their respective abilities to capture non-linear dependencies and long-term temporal patterns, which traditional GARCH models often fail to reflect in emerging markets.</p> <p>While conventional models are often inadequate for capturing the unique characteristics of emerging markets—where non-Gaussian distributions and asymmetric returns prevail—our study seeks to address these limitations. Standard methodologies, including likelihood function-based GARCH models for volatility clustering and Value at Risk (VaR) measures, frequently fall short in accurately reflecting market behavior during crisis periods. Our research delineates three distinct phases in market evolution, which illustrate an increasing maturity in financial markets and fund management practices.</p> <p>Our findings reveal that machine learning models, particularly XGBoost, substantially outperform traditional econometric techniques in volatility forecasting, although the performance of LSTM and Random Forest models varies across different risk applications. SHAP analysis highlights lagged volatility and market index returns as primary drivers of risk predictions. 
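<p>By way of illustration, the pipeline this comparison implies, lagged-volatility features feeding a gradient-boosted regressor with SHAP attributing the forecasts, can be sketched as follows (synthetic data and feature choices are ours, not the authors'):</p>
<pre><code>import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_t(df=5, size=2000)   # synthetic daily returns
vol = np.abs(r)                              # crude realized-volatility proxy
p = 5                                        # number of lags used as features
X = np.column_stack([vol[i:len(vol) - p + i] for i in range(p)])
y = vol[p:]

model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)        # attributes forecasts to each lag
shap_values = explainer.shap_values(X)
</code></pre>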
Ultimately, our findings demonstrate that XGBoost provides the most robust volatility forecasts, offering significant improvements for risk management frameworks and providing a resilient decision-making tool for regulators in the Moroccan context.</p>2026-01-24T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3202A Multi-Device Randomized Response Model for Efficient Estimation of Sensitive Attributes2026-03-16T02:19:38+08:00Ahmad Aboalkhairaaboalkhair@kfu.edu.saAbdelsamiea Abdelsamiea2Taboelenien@Imamu.edu.saMohammad Zayedmaazayed@imamu.edu.saAbdullah H. Al-Nefaieaalnefaie@kfu.edu.saMohamed Ibrahimmiahmed@kfu.edu.saAhmed Elshehaweya-elshehawey@du.edu.eg<p>The Randomized Response Technique (RRT), first introduced by Warner, has become a fundamental approach for estimating sensitive characteristics while ensuring respondent anonymity. Over time, enhancements such as the two-device design by Mangat and Singh have improved both the protection of privacy and the accuracy of estimators. Building on this groundwork, the present research proposes a new RRT model that incorporates multiple randomization devices, providing greater flexibility in balancing efficiency with privacy preservation. Theoretical properties of the model are developed, and criteria for efficiency comparison are established. Numerical analyses are conducted, with special emphasis on scenarios involving three randomization devices. In addition to efficiency improvements, the use of layered randomization fosters greater respondent confidence, thereby increasing the likelihood of truthful responses. Overall, the proposed model offers a practical and reliable advancement in sensitive data collection methodologies, with promising applications in social, health, and behavioral research.</p>2026-01-20T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3255Square New XLindley Distribution: Statistical Properties, Numerical Simulations and Applications in Sciences2026-03-16T02:19:39+08:00Abdelali Ezzebsaaezzebsa@gmail.comThara Belhamrathara.belhamra@univ-annaba.dzZeghdoudi Halimhalimzeghdoudi77@gmail.com<p>In this paper, a new one-parameter lifetime distribution, called the Square New XLindley (SNXL) distribution, is proposed using a square transformation of the New XLindley (NXL) model. The motivation for introducing the SNXL model is to obtain a parsimonious distribution capable of modeling positively skewed data with an increasing failure rate, a common feature in reliability and materials strength applications, while retaining analytical tractability.</p> <p>Several statistical properties of the SNXL distribution are derived, including moments, quantile function, incomplete moments, stochastic ordering, actuarial measures, and fuzzy reliability characteristics. Parameter estimation is investigated using maximum likelihood estimation (MLE), maximum product of spacings estimation (MPSE), and weighted least squares estimation (WLSE). A Monte Carlo simulation study is conducted to evaluate the finite-sample performance of these estimators in terms of bias, mean squared error, and mean relative error.</p> <p>The practical usefulness of the SNXL distribution is illustrated using real engineering and biomedical datasets and compared with several competing Lindley-type and classical lifetime models. 
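<p>The simulation protocol described above follows a standard pattern: repeatedly sample, estimate, and accumulate bias and MSE. A generic sketch with an exponential model as a stand-in (the SNXL density itself is not reproduced in this abstract):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(3)

def mc_bias_mse(theta=2.0, n=100, reps=1000):
    """Monte Carlo bias/MSE of the MLE for a stand-in exponential(rate) model."""
    est = np.empty(reps)
    for i in range(reps):
        x = rng.exponential(scale=1.0 / theta, size=n)
        est[i] = 1.0 / x.mean()              # rate MLE for the stand-in model
    return est.mean() - theta, np.mean((est - theta) ** 2)
</code></pre>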
Graphical diagnostics, formal goodness-of-fit tests, and information criteria indicate that the SNXL model provides a superior or competitive fit while maintaining model simplicity. These results suggest that the SNXL distribution is a useful alternative for modeling lifetime data characterized by monotone hazard rates.</p>2026-01-18T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3299A Comparative Study of Ridge Robust and Reciprocal Lasso Estimators for Semiparametric Additive Partial Linear Models Under Multicollinearity2026-03-16T02:19:40+08:00Hayder Talibhayder.raaid@uos.edu.iqNoor Abdul-Kareem Fayadhnoor.abdulkarim@uos.edu.iqSarah Sabah Akramsakram@uowasit.edu.iq<p>Estimating parameters in Semiparametric Additive Partial Linear Models (SAPLMs) accurately proves quite difficult under high-dimensional data with correlated explanatory variables. Multicollinearity among predictors not only increases the variance of parameter estimates but also makes statistical interpretation more difficult, especially when the number of variables exceeds the sample size. We contrast two strong estimating techniques (Ridge regression with R/W robust estimators and the Reciprocal Lasso method) to solve these problems. Our work assesses their efficacy in overcoming multicollinearity while concurrently choosing important variables. We evaluate the techniques by means of three criteria, namely: Average Absolute Deviation Error (AADE), Mean Squared Error (MSE), and coefficient of determination (R<sup>2</sup>), using actual educational data on elements influencing the academic performance of special needs students. On the real data, results show that the Reciprocal Lasso approach offers more accurate predictions and improved variable selection capacity than both Ridge robust methods. In the simulations, the Reciprocal Lasso method is preferable when the sample size is less than the number of explanatory variables, the Ridge with W robust method is preferable when there is moderate correlation between the explanatory variables, and the Ridge with R robust method is preferable when there is strong correlation between the explanatory variables.</p>2026-01-16T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3302Performance of the Generalized shrinkage Estimator in Zero-Inflated Bell Regression Model2026-03-16T02:19:41+08:00Dalia Abdulhadi dalia.abdulhadi@uomustansiriyah.edu.iqWadhah Ibrahimdr_wadhah_stat@uomustansiriyah.edu.iqRawaa Al-Saffarrawaaalsaffar@uomustansiriyah.edu.iqHaifa Abdhaefaa_adm@uomustansiriyah.edu.iqAhmed Salihamahdi@uowasit.edu.iq<p>The Poisson regression model is an important analytic tool for count data modeling. However, when the data are overdispersed, i.e., when the variance exceeds the mean, the Poisson model is no longer appropriate, and the Bell regression model fits such data better. Count data also frequently contain a very high number of zeros; in this case, the Zero-Inflated Bell regression model is an alternative to the Bell regression model. Parameters of the Zero-Inflated Bell regression model are mostly estimated through the maximum likelihood approach. 
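<p>For context, the generic zero-inflated construction referred to here mixes a point mass at zero with a count pmf \(f(y;\theta)\) (the Bell pmf in this model), with inflation probability \(\pi\):</p> \[ P(Y=0)=\pi+(1-\pi)\,f(0;\theta), \qquad P(Y=y)=(1-\pi)\,f(y;\theta),\quad y=1,2,\dots \]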
When the response variable is modeled by two or more explanatory variables, as in the Zero-Inflated Bell regression model, linear dependence (multicollinearity) among the explanatory variables is a threat in real-life analyses, since it reduces the effectiveness of the maximum likelihood estimator. To address this issue, this paper explores the performance of the generalized shrinkage estimator in the zero-inflated Bell regression model. The superiority of the proposed approaches over the traditional maximum likelihood estimator is validated by the results of simulations and real-data implementations.</p>2025-12-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2896Extra Dai-Liao Method in Conjugate Gradient Method for Solving Minimization Problems2026-03-16T02:19:42+08:00Waleed Abdulazeez Majeedwaleed.amkkh@gmail.comBasim Abas Hassanbasimah@uomosul.edu.iq<p>This study explores different strategies for setting parameters in optimization algorithms, focusing on refining the Dai–Liao (DL) conjugate gradient method by using a modified quasi-Newton framework. The DL version of the conjugate gradient method is known for its effectiveness in addressing large-scale unconstrained optimization challenges. Nonetheless, conventional implementations often depend on differences between successive iterates and gradient vectors, which can limit adaptability and convergence capabilities in certain circumstances. To overcome these limitations, the proposed method introduces an innovative parameter formula that utilizes the curvature condition differently by incorporating objective function values, instead of just relying on point and gradient differences. This use of function values offers more detailed insights into the optimization landscape, thereby enhancing both the stability and accuracy of the search direction. The main benefit of this modification is its augmented computational efficiency and its capacity to ensure global convergence under relatively mild and realistic conditions. Theoretical analysis, including a proof of global convergence for the new method, supports these assertions. To verify the practical effectiveness of this approach, extensive numerical experiments were carried out on various standard test problems. The results consistently show that the modified method surpasses the traditional DL conjugate gradient algorithm in terms of convergence speed and robustness, confirming the theoretical enhancements and underscoring its potential for wider use in nonlinear optimization.</p>2025-10-07T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3038Dynamic Modeling and Multi-Objective Optimization of Takaful Insurance System2026-03-16T02:19:43+08:00Yassine Ghoulamyassine.ghoulam-etu@etu.univh2c.maAbderrahman Yaakoubabderrahman.yaakoub-etu@etu.univh2c.maMohamed Elhiamohamed.elhia@gmail.com<p>This paper proposes a novel dynamical systems approach to model and optimize Takaful insurance operations, contributing to the growing body of research in Islamic finance. To the best of our knowledge, this is the first study to formalize the interactions between the three core components of Takaful—participants, claims, and the mutual fund—within a continuous-time dynamical framework. The model integrates key operational parameters such as enrollment and attrition rates, claim frequency, contribution levels, and profit-and-loss sharing mechanisms. 
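<p>The equations themselves are not reproduced in this abstract, but a three-compartment continuous-time system of the kind described (participants P, claims C, fund F, with enrollment, attrition, claim, and contribution parameters) might look as follows; this is purely a hypothetical sketch, not the authors' model:</p>
<pre><code>from scipy.integrate import solve_ivp

# Hypothetical rates: enrollment a, attrition m, claim rate c per participant,
# contribution k per participant, average payout w per claim.
def takaful(t, y, a=50.0, m=0.02, c=0.1, k=1.0, w=5.0):
    P, C, F = y
    dP = a - m * P          # participants: enrollment minus attrition
    dC = c * P - C          # claims arrive and are settled
    dF = k * P - w * C      # fund: contributions in, payouts out
    return [dP, dC, dF]

sol = solve_ivp(takaful, (0.0, 100.0), [1000.0, 50.0, 5000.0])
</code></pre>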
We first establish the mathematical well-posedness of the system, proving existence, uniqueness, positivity, and boundedness of solutions, followed by a stability analysis of equilibrium points supported by numerical simulations. Building on this foundation, we formulate a multi-objective optimization problem to address the strategic goals of Takaful operators: maximizing participant retention, minimizing claim incidence, and ensuring fund stability. The problem consists in determining the optimal values for the attrition rate, claim occurrence rate, and average contribution, subject to realistic operational constraints. We solve this problem using an integrated NSGA-II and entropy weighting approach, enabling robust trade-off analysis between conflicting objectives. The proposed methodology offers practitioners a quantitative decision-support tool for enhancing membership strategies and risk management while maintaining financial sustainability in accordance with Sharia-compliant principles.</p>2026-01-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3094Enhanced Electricity Demand Forecasting through Metaheuristic Optimization of Model Parameters2026-03-16T02:19:44+08:00Ibtissam Lahjililahjili.ibtissam@ensam-casa.maAziz Lmakrilmakri.aziz@ensam-casa.maMustapha Hainmustapha.hain@ensam-casa.maHassan Oukhouyaoukhouya.hassan@ump.ac.ma<p>Accurate prediction of the fluctuating nature of electricity demand remains a persistent challenge, primarily due to the complexity of distribution systems. This paper applies metaheuristic optimization to enhance state-of-the-art prediction methods. We conducted a comparative study between SARIMAX, which proved effective at capturing trends, seasonality, and the impact of exogenous variables, and the GRU deep learning model, which captures complex non-linear dependencies. Both models were optimized with a Genetic Algorithm (GA), a metaheuristic approach for efficient search of the solution space. The effect of optimization was also tested by comparing the performance with and without GA. First, the results using the real dataset showed that SARIMAX was better than GRU. In the optimized version, the SARIMAX-GA model's predictive capability and explained variance improved significantly compared to the GRU-GA model.</p>2026-01-06T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3170Optimal placement and sizing of photovoltaic generators and static reactive power compensators in distribution systems for minimizing the annual energy purchase cost using the fractional optimization algorithm2026-03-16T02:19:45+08:00Juan Sebastián Alonso Medinajsalonsom@udistrital.edu.coOscar Danilo Montoya Giraldoodmontoyag@udistrital.edu.coJuan Diego Pulgarín Riverajdpulgarinr@udistrital.edu.co<p>This paper presents an innovative algorithm, termed the Fractional Optimization Algorithm (FOA), which utilizes the properties of fractional functions to enhance the integration of photovoltaic (PV) systems and distribution static compensators (D-STATCOMs) into distribution systems (DSs). The FOA employs a discrete-continuous encoding approach to determine the optimal placement and sizing of PV and D-STATCOM devices. A master-slave optimization framework is adopted, where the FOA operates in the master stage, and the successive approximations method is used for power flow analysis in the slave stage. 
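<p>For reference, the successive-approximations power flow used in such slave stages can be written, in related work on this method, as the fixed-point iteration (notation here is ours and may differ from the paper's):</p> \[ \mathbb{V}_d^{t+1} = -\mathbf{Y}_{dd}^{-1}\left[\operatorname{diag}\!\left(\left(\mathbb{V}_d^{t}\right)^{\ast}\right)^{-1}\mathbb{S}_d^{\ast} + \mathbf{Y}_{ds}\,\mathbb{V}_s\right], \] <p>iterated until the largest voltage change falls below a tolerance, with d and s indexing demand and slack buses.</p>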
The algorithm's efficacy is tested on 33- and 69-bus grids, demonstrating significant cost reductions over traditional optimization approaches such as the Vortex Search Algorithm (VSA) and the Sine-Cosine Algorithm (SCA). Furthermore, the FOA achieves superior computational efficiency, underscoring its promise as a robust optimization strategy.</p>2026-01-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3172Boundary Optimal Control of Infinite Order Linear Elliptic Systems Under Pointwise Control Constraints2026-03-16T02:19:46+08:00Basima Abd ElHakimBasemaAbdElhakim27082@azhar.edu.egSamira El-Tamimysamiraahmed@azhar.edu.egGhada Mostafaghadaali@azhar.edu.eg<p>This paper presents a rigorous analysis of an optimal boundary control problem governed by a linear elliptic equation of infinite order subject to pointwise control constraints. Such problems arise naturally in various applications but remain insufficiently studied due to the analytical difficulties associated with infinite-order operators and control constraints. The main objective of this work is to establish the well-posedness of the state equation and derive optimality conditions for the associated control problem. Under assumptions on the system coefficients and the admissible control set, we prove the existence and uniqueness of the weak solution to the state equation. Under pointwise control constraints on the boundary, we demonstrate the existence of an optimal control using convexity and compactness arguments that are adapted to the infinite order setting. By deriving the associated adjoint system, we formulate first-order necessary optimality conditions in the form of a variational inequality involving the boundary adjoint variable. Furthermore, we discuss optimality conditions under coercivity assumptions on the infinite order operator. The results presented in this paper extend several known results for finite order elliptic systems to the infinite order framework, thereby filling an important gap in the existing literature on boundary optimal control.</p>2026-01-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3259A Computational Time Analysis of Dhouib-Matrix-SPP versus Particle Swarm Optimization Metaheuristics for Grid-based Path Planning2026-03-16T02:19:47+08:00Souhail Dhouibsouh.dhou@gmail.comDorra Kalleldorrakallel89@gmail.comNoura BejiBeji.noura@gmail.comSaima Dhouibsaima.dhouib10@gmail.com<p>Path planning is one of the most fundamental problems in mobile robotics. The objective is to determine the shortest feasible trajectory from a starting point to a goal location while avoiding obstacles. Particle Swarm Optimization (PSO) has been widely applied to this problem; however, it suffers from high computational complexity, sensitivity to parameter tuning, and local-optima stagnation. To overcome these limitations, the new Dhouib-Matrix-SPP (DM-SPP) method is proposed, which is rapid, straightforward, and does not require parameter adjustment. 
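<p>As a baseline for the grid-based setting (a standard Dijkstra search over a 4-connected occupancy grid, shown for orientation only; the DM-SPP update rules are not given in this abstract):</p>
<pre><code>import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path cost on a 0/1 occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # goal unreachable

# usage: dijkstra_grid([[0,0,1],[1,0,0],[0,0,0]], (0,0), (2,2)) returns 4
</code></pre>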
Simulation experiments on four case studies (I-shaped, U-shaped, T-shaped, and randomly shaped) demonstrate that DM-SPP consistently outperforms the ranking Particle Swarm Optimization (rPSO) metaheuristic and the artificial potential field-based Particle Swarm Optimization (apfrPSO) metaheuristic in terms of computational time: DM-SPP is 66 times faster than the rPSO metaheuristic and 31 times faster than the apfrPSO metaheuristic. These findings indicate that DM-SPP is a powerful and scalable approach for mobile robot path planning.</p>2026-01-04T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3313Adaptive Multi-Objective OX Optimizer of Irrigation and Fertilization Scheduling Under Weather Uncertainty in Sustainable Agriculture2026-03-16T02:19:48+08:00Abbas Abu Daif drabbasabudaif@anu.edu.joBandar N. Hamadnehi.mohamed@anu.edu.joAbdelmoty M. Ahmeddrabbasabudaif@anu.edu.joIslam Said Fathy Mohamedi.mohamed@anu.edu.jo<p>Modern agriculture requires integrated optimization of water and nutrient management under variable climatic conditions while balancing economic, environmental, and productivity objectives. Traditional approaches optimize these resources separately and fail to adapt to dynamic weather conditions, resulting in suboptimal resource utilization. This paper presents the OX optimizer, a novel nature-inspired algorithm for multi-objective irrigation and fertilization scheduling under weather uncertainty. Inspired by oxen's strength, endurance, and collaborative behavior, the algorithm integrates strength-based movement mechanisms, adaptive learning, and weather pattern memory. The mathematical formulation incorporates stochastic weather scenarios, dynamic soil-water and nutrient balance constraints, and multi-objective functions addressing economic, environmental, and productivity dimensions simultaneously. Extensive computational experiments demonstrate that the OX optimizer achieves 41.7% improvement in generational distance, 50% reduction in convergence iterations, and 33.3% enhancement in solution diversity compared to NSGA-II and MOPSO. The algorithm maintains 97% performance retention when adapting to weather changes, requiring only 4 iterations versus 12 for NSGA-II. Scalability analysis across farm sizes from 1-10 to 100+ hectares confirms excellent performance consistency, maintaining above 95% normalized performance while conventional approaches degrade by 15-25%. The framework simultaneously achieves 93% economic efficiency, 87% environmental impact reduction, and 90% crop productivity, providing 20 diverse Pareto-optimal management strategies. Results demonstrate that biologically-inspired optimization can provide robust, scalable solutions for sustainable agricultural resource management under climate uncertainty.</p>2026-01-22T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2657Identifying Stability Criteria for Suggested Nonlinear Model with Application2026-03-16T02:19:50+08:00Ammar Saad Abduljabbarammarsaad86@uomosul.edu.iqAnas S. Younsanass.youns@uomosul.edu.iqSalim M. AhmadSalim1082019@uomosul.edu.iq<p>This research focuses on forecasting the Arab Republic of Egypt's future population using a proposed nonlinear autoregressive (NAR) model. 
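<p>The functional form of the autoregression is not specified in this abstract, but a nonlinear autoregression \(x_t = f(x_{t-1},\dots,x_{t-p}) + \varepsilon_t\) can be fitted, for illustration, with a small neural network on lagged values (hyperparameters are ours):</p>
<pre><code>import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_nar(series, p=3):
    """Fit x_t = f(x_[t-1..t-p]) with a small MLP (illustrative NAR)."""
    x = np.asarray(series, dtype=float)
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    y = x[p:]
    return MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                        random_state=0).fit(X, y)
</code></pre>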
As the country faces significant challenges due to rapid population growth, reliable forecasting has become essential for effective resource allocation and policy formulation. To address this issue, a NAR model was constructed and trained using historical census data from 1950 to 2023. The model aims to project the population trends of the Arab Republic of Egypt from 2024 to 2033. The forecasting results reveal a steady increase in population over the next decade. These findings confirm the effectiveness of the proposed NAR model in capturing the underlying patterns of Egypt’s population dynamics. The model offers a valuable, data-driven tool for decision-makers to anticipate future demands related to infrastructure, public services, and economic development. In summary, the study establishes the proposed nonlinear autoregressive model as a reliable method for population prediction in the Arab Republic of Egypt.</p>2025-12-22T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2942A New Result of Statistical Convergence in Neutrosophic Generalized Metric Spaces2026-03-16T02:19:51+08:00V. B. Shakilashakilavb.math@gmail.comDr. M. Jeyaramanjeya.math@gmail.com<p>In this paper, we revisit the concept of G-metric spaces and generalize it to G-metrics of nth order. We define the notion of Neutrosophic Generalized Metric Spaces (NGMS) of order n and present an example to illustrate this concept. Some characteristics of NGMS are also presented. Additionally, we define statistical convergence in this setting and establish some related concepts.</p>2025-10-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2953A STUDY ON NEUTROSOPHIC CORDIAL LABELING GRAPHS WITH ALGORITHM2026-03-16T02:19:51+08:00Ragavi Mragavibethali2017@gmail.comKannan Thardykannan@gmail.com<p>We present here the neutrosophic cordial labeling graphs, which integrate neutrosophic and cordial labeling graphs. This work explores three types of graph labeling: fuzzy cordial labeling graphs, intuitionistic fuzzy cordial labeling graphs and neutrosophic cordial labeling graphs. We provide some functions for vertex and edge labeling under specific conditions, such that the cordial label is 0 if the edge labeling value is less than 0.5 and 1 otherwise, i.e., the edge labeling value is rounded to the nearest integer. Furthermore, it meets the cordial labeling requirement, which states that the numbers of edges labeled 0 and 1 differ by at most 1.</p>2025-11-04T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2964Minimum Covariance Distance-Based SMOTE Approaches for Zero-Inflated Datasets with High-Dimensional Heterogeneous Features in Big Data Analytics2026-03-16T02:19:52+08:00Keith R Musaramusarakeith@gmail.comEdmore Ranganairangae@unisa.ac.zaCharles Chimedzacharles.chimedza@wits.ac.zaFlorance Matarisematarise7@gmail.comSheunesu Munyira munyirask@gmail.com<p>Big data in the credit risk landscape is often characterized by zero-inflated datasets, heterogeneity, and high dimensionality. These data aberrations adversely diminish the computational efficacy of the conventional predictive classifiers. To ensure accurate and reliable predictions, it is crucial to remedy these aberrations, as they may result in bias towards the majority class, sparsity, and computational complexity. 
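<p>For orientation, the core SMOTE step that the distance-based variants discussed next modify is a simple interpolation between a minority point and one of its nearest minority neighbours (a vanilla Euclidean sketch):</p>
<pre><code>import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sample(X_min, k=5, n_new=100, seed=0):
    """Synthetic minority samples by interpolation (vanilla SMOTE)."""
    X_min = np.asarray(X_min, dtype=float)
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)            # column 0 is the point itself
    out = np.empty((n_new, X_min.shape[1]))
    for j in range(n_new):
        i = rng.integers(len(X_min))
        nbr = X_min[rng.choice(idx[i, 1:])]  # random true neighbour
        out[j] = X_min[i] + rng.random() * (nbr - X_min[i])
    return out
</code></pre>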
The modified Euclidean distance (MED)-based synthetic minority oversampling technique (SMOTE) approaches have been suggested in contemporary literature as countermeasures for zero-inflated datasets coupled with heterogeneity. Despite their mathematical tractability, these approaches substantially fail to effectively capture correlations and variability among features. They are also susceptible to heavy-tailed error distributed data points (outliers) and collinearity, rendering them computationally suboptimal in high-dimensional data spaces. In this study, the authors propose supplanting the MED with a modified Mahalanobis distance (MMD) in the SMOTE variants, enhancing their ability to adequately capture correlations, variability, and heterogeneous features. To mitigate the intricacies posed by these multifaceted data aberrations in high-dimensional data settings, the authors propose the fast minimum regularized covariance determinant (FMRCD) approach to estimate the parameters of the MMD measure. Therefore, this paper enhances the robustness and computational efficiency of SMOTE-based approaches by leveraging the MMD, computed intrinsically via the FMRCD approach, in conjunction with classical predictive classifiers. The empirical evidence suggests that our approach demonstrates superior predictions and offers a computational-stability edge over traditional approaches. These contributions circumvent overwhelming data complexities presented by zero-inflated datasets combined with high-dimensional heterogeneity in modelling big data phenomena.</p>2026-01-12T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3001Stacking of Ensemble and Boosting Methods for Credit Risk Prediction2026-03-16T02:19:54+08:00Nizar Nornizar.nor@uit.ac.maMohammed Kaicermohammed.kaicer@uit.ac.ma<p>A significant number of loan applicants may not be able to repay their loans, which poses a risk for banks. To help banks mitigate credit risk, this work proposes four machine learning models that provide a binary classification (good or bad) of credit payers. These models include two boosting algorithms, XGBoost (Extreme Gradient Boosting) and EBM (Explainable Boosting Machines), one bagging algorithm, RF (Random Forest), and a hybrid ensemble learning approach using a Stacking method. The latter is a meta-model which learns from the output probabilities of other algorithms to determine the optimal way to combine them, ensuring a more effective prediction. In fact, our stacking model, trained on these probabilities using Logistic Regression, outperformed the three individual models across various metrics, achieving a well-balanced and improved performance for both classes. We chose EBM because it has proven its performance in many fields and, above all, for its ability to provide transparent explanations.</p>2026-01-22T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3007Protecting Wireless Sensor Networks Against Sybil Attacks2026-03-16T02:19:55+08:00Omar Zenzoumomar.zenzoum@uit.ac.maAbdelali Elmounadiabdelali.elmounadi@gmail.comHatim Kharraz Aroussihatim.kharrazaroussi@uit.ac.ma<p>Wireless sensor networks (WSNs) are a central component of the Internet of Things (IoT), offering various applications including environmental monitoring, military surveillance, and smart cities. 
However, WSNs suffer from limited energy and computing resources, making them vulnerable to threats such as Sybil attacks, in which a malicious node generates multiple fictitious identities to exploit legitimate nodes, disrupt routing and degrade performance. In this paper, we address this vulnerability in the context of the Equitable Distribution Energy (EDE) protocol, a modified version of LEACH that introduces a transfer node (TN) responsible for transferring data between the cluster head (CH) and the base station (BS) in order to reduce the load on the CH and balance energy consumption. We propose a defense approach that combines lightweight authentication, RSSI-based detection, and trust management to detect and mitigate Sybil nodes. Simulation results demonstrate that the proposed approach increases the packet delivery ratio (PDR), significantly reduces energy waste and improves network reliability.</p>2025-12-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3018Advancing Volatility Forecasting in Financial Indices: Integrating GARCH Models, Multifractal Indicators, and Deep Learning2026-03-16T02:19:57+08:00Naima Ouallanaimaoualla2001@gmail.comMohammed Salah Chiadmischiadmi@emi.ac.maYoussef Lamrani Alaouilamraniyssf@gmail.com<p>Accurate volatility prediction is essential for effective investment strategies and risk awareness. Yet, the intricate and ever-changing characteristics of markets pose considerable challenges, motivating the use of hybrid frameworks that integrate heteroscedastic models, multifractal analysis, and deep learning techniques. While heteroscedastic models are simple and widely adopted, they often fail to reflect the inherent nonlinearities and multifractal properties of volatility. In contrast, LSTM, GRU, and Transformers, while capable of capturing complex structures, require well-chosen explanatory variables to deliver accurate forecasts.</p> <p>Accordingly, this study conducts a rigorous comparative investigation across the Dow Jones Islamic Market Index, the Dow Jones Global Index, and the S&P 500. We confirm the existence of multifractal scaling and evaluate the performance of deep learning models based on historical features against hybrid models integrating GARCH-type forecasts and multifractal indicators. Results demonstrate that integrating GARCH, EGARCH, and FIGARCH features significantly improves accuracy by embedding key stylized facts such as volatility clustering, asymmetry, and long memory, with statistical significance confirmed by the Diebold-Mariano test. Furthermore, findings indicate that while standalone multifractal features are insufficient, they serve as complementary inputs. 
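<p>To ground the hybrid setup: GARCH-type conditional volatilities can be extracted with the Python arch package and appended to the deep-learning feature set (a minimal sketch; the paper's exact feature engineering is not reproduced here):</p>
<pre><code>import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = rng.standard_t(df=6, size=1500)          # synthetic percent returns

res = arch_model(returns, vol="GARCH", p=1, q=1, dist="t").fit(disp="off")
sigma = res.conditional_volatility                 # feature for LSTM/GRU/Transformer
next_var = res.forecast(horizon=1).variance.values[-1, 0]
</code></pre>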
Rather than proposing a single novel model, the contribution of this work lies in a systematic analysis of feature complementarity, demonstrating that guiding deep learning with econometric signals enhances predictive robustness across diverse market structures.</p>2026-01-12T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3036The Paradox of AI Adoption in Emerging Economies: A Structural Equation Analysis of Usage Intensity and SMEs Performance2026-03-16T02:19:58+08:00Shinta Rahmanishinta.rahmani@mercubuana.ac.idArief Bowo Prayoga Kasmoariefbowo@mercubuana.ac.idMuhammad Rifqim.rifqi@mercubuana.ac.idSigit Setiawansigit.setiawan@gmail.com<p>Purpose: The Jakarta Provincial Government aims for 80% of MSMEs to be digitalized by 2025; however, empirical evidence on whether the intensity of Artificial Intelligence (AI) use truly enhances business performance in developing economies remains inconclusive. This study examines the relationship between AI usage intensity and business performance among MSMEs in Jakarta, while accounting for firm size variations.</p> <p>Method: A cross-sectional survey was conducted among 300 MSME owners/managers in Jakarta’s five administrative regions who had used at least one AI tool in the past 3 months. The Technology Acceptance Model (TAM) was extended with a Resource-Based View (RBV) perspective and firm size variables (micro, small, medium). Data were analyzed using CB-SEM with Maximum Likelihood estimation and FIML to handle missing data.</p> <p>Results: The model demonstrated an excellent fit (χ²/df = 1.13; CFI = 0.993; RMSEA = 0.021; SRMR = 0.018). However, AI usage intensity did not have a significant direct effect on business performance (β = 0.085; p = 0.113). Firm size had a substantial direct effect on performance (small: β = 0.446, p < 0.001; medium: β = 0.548, p < 0.001). Small firms tended to have higher AI usage intensity (β = 0.269, p < 0.001). Nevertheless, mediation analysis confirmed that AI usage did not function as a significant mechanism for improving performance among small or medium firms.</p> <p>Implications: The findings indicate the presence of adoption without impact—access to and intensity of AI use alone are insufficient; business value emerges only when complementary resources (dynamic capabilities, data governance, and human resource skills) are available. Policy programs should therefore integrate managerial training and infrastructure financing rather than merely providing technology license subsidies.</p> <p>Originality/Value: This study is among the earliest quantitative examinations in the ASEAN context exploring the relationship between AI usage intensity and performance among MSMEs, using a SEM–TAM approach that incorporates firm size as a contingency variable.</p>2025-12-21T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3051Development of the digital environment in ensuring the quality of training of specialists2026-03-16T02:19:59+08:00Olena Karamanolenak565@gmail.comVolodymyr Morozv.moroz1@outlook.comYurii Bezghynskyiy-bezghynskyi@hotmail.comViktoriia Volodavchykv_volodavchyk@outlook.comOlena Krutkookrutko1@hotmail.com<p>The purpose of the study is to identify key aspects of the development of the digital educational environment and assess its impact on the quality of training of specialists. 
Special attention is paid to comparing digital platforms, analysing the content of training courses, reviewing the user experience through surveys, and observing the educational process. The methodology included a comparative analysis of popular digital platforms based on the criteria of convenience, integration, functionality, and interaction capabilities. Content analysis of training courses allowed assessing the quality of materials, the presence of interactive elements, and the level of content adaptation to the digital environment. The survey, conducted at Luhansk Taras Shevchenko National University, included 180 students and 70 teachers, and monitoring of the educational process helped assess the effectiveness of digital technologies in a real educational environment. The main results of the study show that the digital environment contributes to improving the availability of educational materials, personalising training, and automating knowledge assessment. Challenges related to technical difficulties, insufficient digital competence of teachers, and a decrease in the level of live communication are identified. A comparative analysis of the platforms demonstrates that Moodle is the most flexible, while Google Classroom and Microsoft Teams provide higher usability but have limited configurability. Content analysis shows that courses that contain interactive elements (video lectures, simulations, gamification) increase the effectiveness of material assimilation. The results of the study confirm that the digital educational environment has great potential to improve the quality of training of specialists. However, it is necessary to improve the interactive capabilities of the platforms, increase the digital literacy of teachers, and ensure a balance between online and offline interaction. The practical importance of the study lies in the development of recommendations for educational institutions on the effective implementation of digital technologies, improving the quality of content, and adapting the educational process to modern technological conditions. The results obtained can be used to improve educational policies, develop new digital learning methods, and enhance the effectiveness of educational process management.</p>2026-01-19T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3058Predictive maintenance in Industrial Systems Using Machine Learning: A Review2026-03-16T19:16:37+08:00Oussama BENMANSOURoussama.benmansour1@outlook.comIbtissam Medarhrimedarhri@enim.ac.maMohamed Hosnihosni.mohamed1@gmail.com<p>Predictive maintenance (PdM) has been an important strategy in modern industry, especially with the use of Machine Learning (ML) techniques to enhance equipment reliability and reduce unplanned downtime. In contrast to traditional maintenance strategies, which mainly relied on reactive or scheduled interventions, PdM provides real-time defect detection and failure prediction through a complete environment of sensor data records. Many recent studies highlight the effectiveness of ML techniques for optimizing intervention tasks. In this study, we present a systematic mapping study (SMS) of ML classification techniques in industrial contexts. A total of 166 articles in industry and manufacturing published between the year 2000 and 2024 were identified from the Scopus digital library, after a selection process. 
The findings show that fault diagnosis is the most frequently investigated topic, with Random Forest (RF) being the predominant ML classifier with 64 appearances, followed by Support Vector Machine (SVM) with 55. Also, recent research highlights the increasing role of Deep Learning (DL) in PdM via the use of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTMs) with 28 and 17 appearances respectively. We have also measured the performance of the ML and DL models across the studied papers by calculating the average performance metric for each model, thus providing a clearer view of the use of each model. Although many papers did not explicitly specify the datasets used, we found that 85.2% of the papers that cite their dataset used real-world datasets, supporting practical applicability. As far as the metrics are concerned, Accuracy is the most dominant metric with 100 occurrences, followed by Precision with 61, Recall with 57, and F1-score with 37. The most used tools are Python with 107 occurrences, R with 40 and MATLAB with 20. These findings show that there is a need for publicly available datasets, as well as the development of alternative classification techniques to advance industrial AI PdM applications.</p>2026-01-24T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3072Trajectory Estimation of Automated Guided Vehicle Based on Linear Modelling Using Ensemble Kalman Filter2026-03-16T02:20:03+08:00Kresna Oktafiantokresnaoktafianto@gmail.comMiswanto Miswantomiswanto@fst.unair.ac.idCicik Alfiniyahcicik-a@fst.unair.ac.idTeguh Herlambangteguh@unusa.ac.id<p>A robot is a mechanical device that can perform physical tasks, either under human supervision and control or autonomously, using programs that apply the principles of artificial intelligence. One type of robot that is widely developed today is the Automated Guided Vehicle (AGV), which moves from one place to another using path guidance located along the AGV path. The position monitoring system is the most important part of the AGV. The navigation system of mobile vehicles can be built using a relative position sensor or an absolute position sensor, and many mobile vehicles in robotics already use position estimation as their navigation system. The study starts with the preparation of a mathematical model of the AGV movement in the form of a non-linear model; this model is then linearized using the Jacobian matrix. The resulting linear model is the platform for the navigation and guidance system of the AGV. The main objective of this study is to maintain position accuracy by continuously applying trajectory estimation to AGV navigation and guidance with the Ensemble Kalman Filter. The simulation results show that by generating 500 ensembles, the best accuracy level is around 99.45%. Overall, from the three simulations carried out, an accuracy level of around 97.8%–99.45% was obtained.</p>2025-12-25T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3082Analytical Expression of Nonlinear Vibration Analysis of Triple-Walled Carbon Nanotubes2026-03-16T02:20:04+08:00Dorathy Cathrine Adorathycathrine@gmail.comR. 
Rajadorathycathrine@gmail.comSwaminathan Rswaminathanmath@gmail.com<p>This article employs the homotopy perturbation method (HPM) to derive analytical solutions for the nonlinear vibrations of triple-walled carbon nanotubes (TWCNTs) embedded in an elastic medium. A triple-beam model is used, where the governing equations for each layer are coupled with those of adjacent layers through van der Waals interlayer forces. The study examines the amplitude-frequency response of TWCNTs under large-amplitude vibrations, analysing the effects of variations in the elastic medium's material properties as well as changes in the nanotube's geometric parameters. <em>Using the homotopy perturbation method (HPM), a nonlinear system of equations can be reformulated into an approximate analytical expression</em>. The numerical results demonstrate the rapid convergence of the derived series solutions toward the exact solution. Additionally, the analytical findings are validated against simulation results obtained using a MATLAB program, showing strong agreement between the two approaches.</p>2026-01-03T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3087Integrated system for early fire detection and evacuation based on Arduino2026-03-16T02:20:05+08:00Akyltai Burgegulova.burgegulov@outlook.comTalgat Mazakovt-mazakov@hotmail.comGulzat Ziyatbekovaziyatbekovagulzat@gmail.comSholpan Jomartovas_jomartova@outlook.comAigerim Mazakovaaigmazakova45@hotmail.com<p>In the present study, a system is developed to ensure an efficient and safe evacuation process in case of fire using modern threat detection and evacuation route optimization technologies. During the research, algorithms for analysing data from gas sensors, algorithms for optimizing evacuation routes based on graph theory, and software for integration with an Arduino microcontroller and real-time processing of the obtained data were developed and used. The research resulted in the development of an intelligent fire safety system based on Arduino microcontroller and MQ series gas sensors for early fire detection. The application of graph algorithms allowed determining the optimal evacuation paths, taking into account the building parameters and distribution of people. The system has shown high efficiency in calculating optimal evacuation routes, minimizing risks in emergency situations. It was revealed that the system's integration with mobile applications and other smart city components has the potential to expand functionality and improve emergency coordination.</p>2026-01-20T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3089Applied the Fuzzy Naïve Bayes Algorithm to Word Sense Disambiguating2026-03-16T02:20:06+08:00Bouchra DAOUDIbouchra.daoudi@usmba.ac.maHassania HAMZAOUIhassania.hamzaoui@usmba.ac.ma<p>In this article, the Fuzzy Naïve Bayes algorithm is presented. This algorithm integrates the classical Naïve Bayes model with fuzzy logic, in order to address the complex problem of semantic disambiguation in Arabic. 
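<p>One way to realize this idea, keeping graded sense memberships instead of a hard argmax, is to rescale the Naïve Bayes posteriors into fuzzy membership degrees (an illustrative sketch; the paper's exact fuzzification is not specified in the abstract):</p>
<pre><code>from sklearn.naive_bayes import MultinomialNB

def fuzzy_sense_memberships(X_train, senses, X_test):
    """Return a fuzzy membership degree per candidate sense, not a hard label."""
    clf = MultinomialNB().fit(X_train, senses)
    proba = clf.predict_proba(X_test)                 # P(sense | context)
    # one simple fuzzification: rescale so the top sense has membership 1
    return proba / proba.max(axis=1, keepdims=True)
</code></pre>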
This task remains particularly challenging due to the morphological richness and lexical ambiguity of the Arabic language.<br>The approach adopted aims to model the uncertainty linked to the multiple possible interpretations of a word in context by assigning each meaning a fuzzy degree of membership rather than a strict classification.<br>The evaluation of the algorithm was conducted on three distinct corpora, utilising lexical and syntactic features. The performance obtained was systematically compared with that of the standard Naïve Bayes model.<br>The experimental results demonstrate a substantial enhancement in terms of accuracy and robustness, underscoring the contribution of fuzzy logic to the management of semantic uncertainties.</p>2026-01-17T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3116Enhancing the Accuracy of Standard Normal Distribution Approximations2026-03-16T02:20:07+08:00Omar Eidousomarm@yu.edu.joLoai Alzoubiloai67@aabu.edu.joAhmad Hanandehhanandeh@iu.edu.sa<p>In this paper, we introduce a novel approximation for the standard normal distribution function, significantly improving its accuracy. Using the maximum absolute error (Max-AE) and mean absolute error (MAE) as metrics, our approximation achieves a Max-AE of 2.95 × 10<sup>−5</sup>, outperforming most existing methods. Additionally, we present an approximation for the inverse normal distribution, showing its superiority over many current models. Numerical comparisons validate the efficiency of our methods, making them applicable in fields like statistical analysis, machine learning, and financial modeling.</p>2026-01-24T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3156A Decision Making Framework Using Parameterized Interval Valued Fuzzy Soft Expert Sets2026-03-16T02:20:08+08:00Anwar Bataihaha.bataihah@jadara.edu.jo<p>The notion of a soft expert set, which allows a user to access the opinions of all experts in a single model and apply it to decision-making situations, was established by Alkhazaleh and Salleh in 2011. In addition, they presented the idea of the fuzzy soft expert set, which combines the concepts of the fuzzy and soft expert sets. By merging the interval-valued fuzzy set and soft set models, Yang et al. introduced the idea of an interval-valued fuzzy soft set in 2009. This study aims to integrate the research of Alkhazaleh and Salleh (2011) and Yang et al. (2009), resulting in the development of a novel idea: the parametrized interval-valued fuzzy soft expert set (PIVFSES). Furthermore, we introduce its operations (complement, union, intersection, AND, and OR) and analyze their properties. A decision-making problem is analyzed using the parametrized interval-valued fuzzy soft expert set. Additionally, our approach will be more effective and valuable as it allows the user to know the opinions of all the specialists in one place. 
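<p>As a toy illustration of such interval-valued expert aggregation (one simple midpoint scheme, not necessarily the operations defined in the paper):</p>
<pre><code>import numpy as np

# Each expert assigns an interval [lo, hi] membership per alternative.
opinions = {
    "expert1": [(0.6, 0.8), (0.2, 0.5)],
    "expert2": [(0.5, 0.9), (0.3, 0.4)],
}
vals = np.array(list(opinions.values()))     # shape (experts, alternatives, 2)
scores = vals.mean(axis=2).mean(axis=0)      # interval midpoints, then expert mean
best = int(np.argmax(scores))                # index of the preferred alternative
</code></pre>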
We provide a final application of this idea to decision-making situations.</p>2026-01-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3163Extended Sehgal-Guseman Contractions in Generalized Metric Spaces with Applications to Fractional and Elastic Systems2026-03-16T02:20:09+08:00Haitham Qawaqnehh.alqawaqneh@zuj.edu.jo<p>This paper introduces and analyzes a novel class of Sehgal–Guseman-type contractions in the framework of extended $b$-metric spaces. By incorporating functional parameters that depend on iterates of the mapping, we establish generalized fixed-point theorems that significantly extend classical results. The proposed contraction conditions offer enhanced flexibility and applicability, particularly in nonlinear analysis. We demonstrate the practical relevance of our theoretical findings through applications to nonlinear fractional differential equations and boundary value problems, supported by numerical examples and comparative analysis. Our results contribute to the advancement of fixed-point theory in generalized metric settings and open new avenues for solving complex functional equations.</p>2026-01-12T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3187On the Construction of Schauder Bases in Hilbert Spaces via Unitary Representations2026-03-16T02:20:10+08:00Andrés Felipe Cameloafkamelo@utp.edu.coCarlos Alberto Ramírezcaramirez@utp.edu.coGuillermo Villa Martínezgvilla@utp.edu.co<p>This article develops a unified framework for constructing Schauder bases in Hilbert spaces from unitary representations of locally compact groups, with emphasis on the affine action (the ax+b group) and its wavelet realization. We begin with the contrast between Hamel bases (algebraic existence) and Schauder bases (topological reconstruction), and show how topology—via continuity of coordinate functionals and convergence in norm—guides the validity of expansions useful in functional analysis.</p> <p>At an abstract level, we review Haar measure, regular representations, and the notion of a cyclic vector, and we state Schauder-type criteria for systems generated by orbits \(\{\pi(g)f\}_{g\in G}\). For the affine group, we recall the continuous wavelet transform, admissibility, and the reproduction formula; we then discretize on a dyadic lattice to obtain orthonormal (hence Schauder) systems in \(L^2(\mathbb{R})\) via multiresolution and quadrature mirror filter (QMF) conditions. The Haar wavelet appears as a prototypical case: its discrete orbit under dilations and translations generates a complete orthonormal basis.</p> <p>On the computational side, we implement simulations comparing Haar approximations with Fourier series on \([-3,3]\). We consider three representative functions: \(t^2\) (nonperiodic), rectangular wave with \(T=1\), and triangular wave with \(T=1\). We show that, for periodic functions, the Fourier series must be computed with the natural period (an indispensable correction), and that Haar offers localization advantages and robustness near discontinuities (mitigating Gibbs phenomena).
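The Haar-versus-Fourier comparison just described can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' simulation code: it approximates the T=1 rectangular wave on [-3,3] by a truncated Fourier series (with the natural period) and by a Haar-level projection (averages over dyadic cells), then reports the errors.

```python
import numpy as np

# Rectangular wave with period T = 1 on [-3, 3], densely sampled.
t = np.linspace(-3, 3, 3 * 2**12, endpoint=False)
f = np.sign(np.sin(2 * np.pi * t))

# (i) Fourier: the square wave has only odd sine harmonics, 4/(pi*n).
N = 25
fourier = sum(4 / (np.pi * n) * np.sin(2 * np.pi * n * t)
              for n in range(1, N + 1, 2))

# (ii) Haar projection at scale 2^-k: piecewise-constant averages on dyadic cells.
k = 6
cells = 6 * 2**k                       # dyadic cells of width 2^-k over [-3, 3]
width = t.size // cells
haar = np.repeat(f.reshape(cells, width).mean(axis=1), width)

for name, approx in [("Fourier", fourier), ("Haar", haar)]:
    print(name, "max error:", np.abs(approx - f).max(),
          " L2 error:", np.sqrt(np.mean((approx - f) ** 2)))
# Both approximations concentrate their error at the jumps: Haar confines it to
# the dyadic cells containing each discontinuity, while the Fourier partial sum
# oscillates (Gibbs phenomenon) in a neighbourhood of every jump.
```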
For nonperiodic functions, the implicit periodization in Fourier introduces global artifacts that Haar partially avoids.</p> <p>We conclude by pointing to two directions for extension: (i) more regular wavelets (Daubechies, Riesz bases) and extensions to Banach spaces via coorbit theory and its discretization; and (ii) more general group actions (e.g., anisotropic semidirect products) tailored to specific geometries. The results strengthen the bridge between algebraic generation by group actions and stable reconstruction in functional analysis.</p>2026-01-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3258Linear Diophantine HyperFuzzy Set and SuperHyperFuzzy Set2026-03-16T02:20:11+08:00Takaaki FujitaTakaaki.fujita060@gmail.comAhmed Heilatahmed_heilat@yahoo.comRaed Hatamlehraed@jadara.edu.joArkan Ghaibarkan.ghaib@stu.edu.iq<p>Uncertainty modeling underpins decision-making across diverse domains, and numerous frameworks—such as Fuzzy Sets, Rough Sets, Hesitant Fuzzy Sets, and Plithogenic Sets—have been developed to capture different facets of imprecision. Hyperfuzzy Sets and their recursive generalization, SuperHyperfuzzy Sets, assign set-valued membership degrees at multiple hierarchical levels to represent uncertainty more richly. The Linear Diophantine Fuzzy Set further refines this approach by imposing weighted linear Diophantine constraints on membership and non-membership grades. In this paper, we define two new constructs—the Linear Diophantine Hyperfuzzy Set and the Linear Diophantine SuperHyperfuzzy Set—by integrating Diophantine constraints with hyperfuzzy and superhyperfuzzy frameworks, and we present a concise application example. A Linear Diophantine HyperFuzzy Set assigns each element set-valued membership and nonmembership grades, constrained by a linear Diophantine relation. An (m,n)-Linear Diophantine SuperHyperFuzzy Set makes the same assignment recursively across multiple hierarchical levels, under the same linear Diophantine constraint. We also examine the algorithms associated with these notions. These extensions offer a more structured, hierarchical means of applying Linear Diophantine Fuzzy Set methodology in practical uncertain environments.</p>2026-01-18T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3280Optimized Ambiguous-Key ECC Signatures for Lightweight and Secure IoT Systems2026-03-16T02:20:12+08:00Mohamed El ouafimohamed.elouafi10@ump.ac.maKaoutar Lamrini Uahabika.lamrini@ump.ac.maAbderrahim Aslimania.slimani@ump.ac.maAbderrahim Zannoua.zannou@uae.ac.ma<p>In constrained IoT environments, traditional digital signatures struggle to achieve an effective balance<br>between security, compactness, and computational efficiency. To overcome these constraints, we propose a<br>lightweight elliptic-curve signature scheme based on a dual-component private key (x, Q1) and two auxiliary<br>commitments, inspired by the Schnorr structure. The design introduces structural ambiguity in the secret<br>key, increasing resistance to key-recovery attacks while maintaining a lightweight and fast signature process.<br>Experimental evaluation on NIST-standardized elliptic curves shows competitive performance: key generation<br>ranges from 11.6 ms to 232.1 ms, signing from 17.9 ms to 245.4 ms, and verification from 22.5 ms to 258.0 ms,<br>with energy consumption below 2.8 μJ.
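For readers unfamiliar with the Schnorr structure the scheme above builds on, here is textbook Schnorr over a toy multiplicative group. This is not the paper's ambiguous-key ECC construction, and the 11-bit parameters are demonstration-only and never secure; the sketch only shows the commit-challenge-response pattern.

```python
import hashlib, secrets

# Textbook Schnorr signature over a toy Schnorr group (NOT the proposed
# ambiguous-key ECC scheme, and far too small to be secure).
p, q, g = 2039, 1019, 4          # p = 2q + 1; g = 4 generates the order-q subgroup

def H(*parts):
    h = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return int(h, 16) % q

def keygen():
    x = secrets.randbelow(q - 1) + 1     # private key
    return x, pow(g, x, p)               # public key y = g^x mod p

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1     # fresh per-signature nonce
    r = pow(g, k, p)                     # commitment
    e = H(r, msg)                        # challenge
    s = (k + x * e) % q                  # response
    return e, s

def verify(y, msg, sig):
    e, s = sig
    # g^s * y^(-e) = g^(k + xe) * g^(-xe) = g^k = r, so recompute the challenge.
    r = (pow(g, s, p) * pow(y, (q - e) % q, p)) % p
    return H(r, msg) == e

x, y = keygen()
sig = sign(x, "sensor-reading-42")
print(verify(y, "sensor-reading-42", sig))   # True
print(verify(y, "tampered", sig))            # False
```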
The results confirm that the proposed scheme offers an effective, balanced compromise between compactness, runtime efficiency, energy usage, memory requirements, and practical security guarantees, making it suitable for distributed architectures and resource-constrained IoT devices.</p>2026-01-24T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3351Global stability of a class of fractional partial differential equations describing the dynamics of viral infection with therapy and adaptive immunity2026-03-16T02:20:14+08:00Mohammad Eloualymoha2000eloualy@gmail.comAbdelaziz El Hassaniabdelazizelhasani@gmail.comKhalid Hattafk.hattaf@yahoo.frAbdelhafid Bassouabdelhafid.bassou@etu.univh2c.ma<p><span class="fontstyle0">In this article, we formulate a mathematical model based on fractional partial differential equations (FPDEs) to describe the spatiotemporal progression of viral infections, incorporating the effects of adaptive immunity and antiviral treatment. The model includes a regional fractional Laplace operator to account for the anomalous diffusion observed within the infected medium. We investigate the existence and uniqueness of equilibria and establish their global stability using Lyapunov functions tailored to the associated reaction systems. Moreover, numerical simulations are presented to illustrate the analytical results.</span></p>2026-01-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2906Attention-Guided Graph Neural Networks with Adaptive Feature Selection for Explainable Software Defect Prediction2026-03-16T02:20:15+08:00Haneen Shehadehhaneenshehadeh1993@yahoo.comNjood Anwer Aljarrahnjood991.aljarrah@gmail.comRazan Ali ObeidatRazan.obeidat95@gmail.comAshraf A. Abu-Einashraf.abuain@bau.edu.joMohammed Tawfikkmkhol01@gmail.com<p>Software defect prediction plays a critical role in quality assurance, yet existing approaches face significant limitations in capturing complex inter-module dependencies while providing interpretable predictions essential for practical deployment. Traditional machine learning methods rely on handcrafted features that fail to model structural relationships within software systems, while recent deep learning approaches lack the explainability required for industrial adoption. This paper proposes an attention-guided graph neural network framework that integrates multi-algorithm feature selection with graph-based structural modeling to achieve superior defect prediction performance while maintaining comprehensive interpretability. Our framework combines five complementary feature selection methods (SHAP importance, permutation importance, CMA-ES optimization, Boruta selection, and mRMR analysis) to identify the most predictive software metrics, constructs similarity-based graphs to capture inter-module relationships, and employs multi-head Graph Attention Networks (GATv2) to learn defect patterns through attention mechanisms.
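A minimal sketch of the multi-head GATv2 building block described above, assuming torch and torch_geometric are installed. This is not the authors' full defect-prediction framework: nodes stand for software modules, edges for similarity links, and the model outputs a per-module defect logit; all shapes and hyperparameters are illustrative.

```python
import torch
from torch_geometric.nn import GATv2Conv

# Two-layer GATv2 over a module-similarity graph (illustrative only).
class DefectGAT(torch.nn.Module):
    def __init__(self, in_dim, hidden=32, heads=4):
        super().__init__()
        self.g1 = GATv2Conv(in_dim, hidden, heads=heads)   # multi-head attention
        self.g2 = GATv2Conv(hidden * heads, 1, heads=1)    # defect logit head

    def forward(self, x, edge_index):
        h = torch.relu(self.g1(x, edge_index))
        return self.g2(h, edge_index).squeeze(-1)

# Toy data: 10 modules with 16 selected metrics each, random similarity edges.
x = torch.randn(10, 16)
edge_index = torch.randint(0, 10, (2, 40))     # COO edge list
model = DefectGAT(in_dim=16)
print(torch.sigmoid(model(x, edge_index)))     # per-module defect probabilities
```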
The approach incorporates multi-modal explainability through attention weight visualization, LIME attributions, and feature importance analysis to provide actionable insights for software practitioners. Comprehensive evaluation on NASA PROMISE and GHPR datasets demonstrates substantial performance improvements, achieving mean F1-scores of 95.52% and 91.6%, respectively, representing gains of 2.07% to 6.62% over state-of-the-art methods including CodeBERT, standard GAT, and traditional machine learning approaches. Ablation studies confirm that graph construction contributes most significantly to performance improvements (+3.55% F1), while feature importance analysis reveals that static invocations dominate modern defect patterns, providing specific architectural guidance for code quality improvement. The framework maintains computational efficiency suitable for continuous integration pipelines while scaling effectively from small projects to enterprise systems. Our contributions advance both theoretical understanding of software defect patterns through attention mechanism analysis and practical capabilities for industrial defect prediction through comprehensive explainability integration.</p>2025-12-11T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3128Anomaly detection in endpoint security: Leveraging baseline deviation techniques for enhanced protection2026-03-16T02:20:17+08:00Kamran Asgarovasgarovkamran2@gmail.com<p>The aim of the study was to develop approaches for utilising deviations from a baseline to identify anomalies in endpoint protection, with the goal of enhancing threat detection efficiency. The work involved an analysis of anomaly detection in endpoint protection and the development of baseline deviation approaches to improve security. The research results included the creation of baseline programs for analysing network traffic using Z-scores, as well as for identifying correlated events based on timestamps and values, which enabled the detection of anomalous activities. Process schemas for classifying anomalous events and responding to them using machine learning (ML) methods were demonstrated. Furthermore, approaches such as dynamic baseline updating, multivariate deviation analysis, temporal contextual models, integration with event correlation analysis, and risk-based deviation ranking systems were developed. Dynamic baseline updating allowed for real-time adaptation to system behaviour changes, multivariate analysis revealed complex relationships between parameters, and temporal contextual models accounted for cyclical patterns and trends in the data. On the other hand, integration with event correlation analysis uncovered interdependencies between different types of activity, while risk-based deviation ranking systems prioritised detected anomalies, enabling faster responses to the most critical threats. The results also included an analysis of the advantages, limitations, and application examples of each approach, covering areas such as virtual private networks, supervisory control and data acquisition (SCADA) systems, and Internet of Things (IoT) devices.
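The Z-score baseline-deviation idea with dynamic baseline updating, as described in the endpoint-security abstract above, can be sketched as follows. The parameter names, threshold, and traffic values are illustrative assumptions, not values from the study; the baseline is maintained as an exponentially weighted moving average so the detector adapts as endpoint behaviour drifts.

```python
import numpy as np

# Z-score anomaly detector with a dynamically updated (EWMA) baseline.
def zscore_anomalies(traffic, warmup=50, alpha=0.05, threshold=3.0):
    mean = traffic[:warmup].mean()        # initial baseline from a warm-up window
    var = traffic[:warmup].var()
    flags = np.zeros(len(traffic), dtype=bool)
    for i in range(warmup, len(traffic)):
        x = traffic[i]
        z = (x - mean) / np.sqrt(var + 1e-12)
        flags[i] = abs(z) > threshold     # deviation beyond 3 sigma -> anomaly
        # dynamic baseline update (EWMA of mean and variance)
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return flags

rng = np.random.default_rng(0)
traffic = rng.normal(100, 5, 500)         # e.g. bytes/s on an endpoint
traffic[250] = 180                        # injected spike
print(np.where(zscore_anomalies(traffic))[0])  # flags index 250 (plus rare 3-sigma tails)
```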
The findings confirm that the proposed approaches reduce false positives, improve anomaly detection accuracy, and enhance the resilience of cybersecurity systems.</p>2026-03-13T11:22:39+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3346Adult Dengue Fever in Bangladesh: A One-Sample Rank-Based Test of Hematologic Location Against Healthy References2026-03-16T02:20:18+08:00Nabila A. Alsharifnabila_alsharif@coadec.uobaghdad.edu.iqInaam Aboud HussainInaam.aboud@coadec.uobaghdad.edu.iqEthar Hussain Jawadethar.h@colaw.uobaghdad.edu.iq<p>Dengue fever is a mosquito-borne viral infection that produces characteristic abnormalities in routine blood tests, yet these hematologic changes are typically analysed separately for each parameter rather than as a combined multivariate profile. This study investigated whether the joint hematologic profile of adult dengue patients in Bangladesh is systematically displaced from healthy adult reference values. We analysed a cohort of laboratory-confirmed adult dengue cases from a Bangladeshi hospital and focused on four core hematologic indices: haemoglobin, white blood cell count, platelet count, and platelet distribution width (PDW). External adult reference means were used to define a healthy location vector, and robust multivariate inference was carried out using the rank-based location test of Utts and Hettmansperger (1980). Sex-specific (male, female) and pooled (all adults) analyses were performed after careful data cleaning, outlier diagnostics, and checks of non-normality. Across all sex-specific and pooled analyses, the same multivariate profile emerged: haemoglobin, white-cell, and platelet levels were consistently lower than their healthy reference means, whereas PDW was higher, indicating greater platelet-size variability. The Utts–Hettmansperger test strongly rejected the null hypothesis of equality with the healthy reference vector in every analysis, documenting a large and coherent displacement of the dengue group in the four-dimensional hematologic space. Taken together, these results provide robust, distribution-free statistical evidence that adult dengue fever in Bangladesh is associated with a stable, biologically interpretable shift in core blood indices, integrating leukopenia, thrombocytopenia, and altered platelet morphology into a single multivariate summary. This study demonstrates that robust rank-based multivariate location tests can enhance traditional laboratory interpretation by quantifying the joint displacement of key blood indices in infectious-disease cohorts such as adult dengue.</p>2026-03-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3243A Temporal Adaptive Fuzzy Clustering Framework for Dynamic Behavioral Segmentation in E-Commerce2026-03-16T02:20:20+08:00Omar El Aalouchemanal.loukili@usmba.ac.maFaycal Messaoudifaycal.messaoudi@usmba.ac.maManal Loukilimanal.loukili@usmba.ac.maRiad Loukiliriad.loukili@usmba.ac.ma<p>Large volumes of behavioral data are generated by e-commerce platforms through customer browsing patterns, transaction histories, and product interactions. However, the complexity, noise, and temporal evolution of such behaviors are not adequately captured by traditional clustering methods. To address these limitations, <br>an Adaptive Fuzzy Clustering for Behavioral Segmentation (AFCBS) framework is proposed. 
In this framework, temporal adaptation, robust preprocessing, and outlier handling are integrated to model evolving and overlapping behavioral patterns. At its core, a Fuzzy C-Means with Temporal Adaptation (FCM-TA) algorithm <br>is introduced, in which temporal weighting is incorporated into the objective function so that dynamic and valid fuzzy memberships are maintained. The framework is evaluated on the UCI Online Retail dataset, where 488,000 cleaned transactions from 4,300 customers are analyzed. Comparative experiments are conducted against K-means, Gaussian Mixture Models, classical FCM, and hierarchical fuzzy clustering. Superior segmentation performance is achieved by AFCBS, as reflected in higher cohesion and separation (Silhouette = 0.46), lower fuzziness (Partition Entropy = 0.73), and stronger temporal consistency (TSI = 0.82). A simulated marketing scenario further indicates that a 19.4% increase in conversion rates can be obtained when AFCBS-based segmentation is used.</p>2026-03-09T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3344The Impact of Wavelet-Based Denoising on Beta Regression Model Fit2026-03-16T02:20:21+08:00Husam Waleed Yaseenhusam.waleed@uomosul.edu.iqTaha Hussein Alitaha.ali@su.edu.krdSaif Ramzi Ahmedsaiframz525@gmail.comHeyam Hayawihe.hayawi@uomosul.edu.iq<p>This paper discusses the relevance of wavelet-based denoising coupled with beta regression for the analysis of continuous, bounded response variables that are sensitive to noise. This hybrid approach was evaluated on both simulation experiments and real-world industrial data. The simulation phase generated data with varying sample sizes, precision parameters, and noise levels to investigate the effects of pre-processing the response variable with discrete wavelet transforms – Daubechies, Symlets, and Coiflets – on model fit, accuracy, and robustness. These wavelets were selected due to their complementary mathematical properties, which offer different equilibria for time-frequency localization, symmetry, and smoothness, and are suitable for denoising bounded response variables before modeling with beta regression. The simulation results indicated that wavelet-denoised models consistently outperform the conventional beta regression in noisy conditions. Daubechies and Symlets performed better in simulations overall. For the real data analysis, using 32 observations from a gasoline production process, wavelet-based denoising improved model fit, prediction precision, and residual behavior. In this case, the Coiflets wavelet performed better, providing the highest log-likelihood and precision estimates and lowest AIC, BIC, and MSE values. Residual testing confirms better symmetry and reduced variability in wavelet-enhanced models.
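A minimal sketch of the denoising step described above, assuming PyWavelets is installed: soft-threshold the detail coefficients of a bounded response before it is passed to a beta regression. The wavelet, decomposition level, and threshold rule here are illustrative choices, not the paper's exact settings.

```python
import numpy as np
import pywt

# Wavelet denoising of a bounded response in (0, 1) prior to beta regression.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
y = 0.2 + 0.6 / (1 + np.exp(-10 * (t - 0.5)))        # smooth signal in (0, 1)
y_noisy = np.clip(y + rng.normal(0, 0.03, t.size), 1e-3, 1 - 1e-3)

coeffs = pywt.wavedec(y_noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest level
uthresh = sigma * np.sqrt(2 * np.log(y_noisy.size))   # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, uthresh, mode="soft")
                          for c in coeffs[1:]]
y_denoised = np.clip(pywt.waverec(denoised, "db4")[: t.size], 1e-3, 1 - 1e-3)

print("noisy MSE   :", np.mean((y_noisy - y) ** 2))
print("denoised MSE:", np.mean((y_denoised - y) ** 2))
# y_denoised, still bounded in (0, 1), is what the beta regression would be fit to.
```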
Wavelet preprocessing is a useful and successful improvement over plain beta regression for industrial and process data that contain noise and occasional outliers.</p>2026-02-24T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3212Causal Inference in Econometrics Using Machine Learning: Estimating the Effect of AI and Automation Adoption on Firm Productivity in Europe2026-03-16T02:20:23+08:00Md. Shahidul Islamsshahid01921@gmail.comMd Ashikquer Rahmanabdul019131@gmail.comAkbar Ali Hossainnadim016280@gmail.com<p>Artificial intelligence and automation are becoming central to how European firms work, compete, and organize their production. But despite the rapid growth of these technologies, there is still a key question: does adopting AI genuinely make firms more productive, or are already-productive firms simply more inclined to adopt it? This study addresses that question by combining established econometric methods with newer causal machine learning techniques. Using a large panel of European firms from 2010 to 2023, built from Orbis and EU KLEMS, AI adoption is identified through both investment measures and text-based disclosure indicators. Across multiple empirical approaches, including fixed effects, difference-in-differences, and instrumental variable models, the results consistently show productivity gains of roughly 3% to 6% among AI-adopting firms. Double Machine Learning produces a similarly robust estimate of around 4.5%. Event study evidence further indicates no pre-adoption improvements, with productivity gains emerging gradually afterward. The effects, however, are uneven. Larger firms, those with more advanced digital systems, and firms employing a higher share of skilled workers benefit noticeably more from AI adoption. In contrast, firms lacking strong digital foundations or sufficient human capital see smaller gains. The results indicate that adopting AI does boost firm productivity, but the size of the benefit depends heavily on the presence of complementary skills and digital infrastructure.</p>2026-03-04T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3209Using The Biorthogonal Wavelet to Address the Problem of Heterogeneity of Variance2026-03-16T02:20:25+08:00Wisam Wadullah Saleemwisam-stat@uomosul.edu.iq<p>Thirty-two experimental units were included in this study under a completely randomized design to ensure an impartial distribution of the treatments. The analysis addressed the problem of heterogeneity of variances using biorthogonal wavelets together with the discrete wavelet transform (DWT), which gave a more robust model of the underlying data structure. The vanishing moments of the biorthogonal wavelet family were used, together with the universal threshold (UT) value and the hard thresholding rule. Comparative evaluations were performed across multiple performance criteria.
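Returning to the Double Machine Learning estimate quoted in the causal-inference abstract above, the partialling-out idea can be sketched as follows. This is a generic DML sketch on synthetic data (not Orbis/EU KLEMS, and not the authors' code): cross-fitted random forests estimate the nuisance functions, and the adoption effect is recovered from a residual-on-residual regression with an influence-function standard error.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

# Partialling-out DML: theta measures the effect of adoption D on outcome Y.
rng = np.random.default_rng(0)
n, p, theta = 2000, 10, 0.045                  # true effect ~4.5% (log points)
X = rng.normal(size=(n, p))
D = (X[:, 0] + rng.normal(size=n) > 0).astype(float)       # AI adoption
Y = theta * D + 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.2, n)

ml_y = RandomForestRegressor(n_estimators=200, random_state=0)
ml_d = RandomForestRegressor(n_estimators=200, random_state=0)
y_res = Y - cross_val_predict(ml_y, X, Y, cv=5)            # partial out E[Y|X]
d_res = D - cross_val_predict(ml_d, X, D, cv=5)            # partial out E[D|X]

theta_hat = np.sum(d_res * y_res) / np.sum(d_res ** 2)     # residual-on-residual OLS
psi = d_res * (y_res - theta_hat * d_res)                  # influence function
se = np.sqrt(np.mean(psi ** 2) / np.mean(d_res ** 2) ** 2 / n)
print(f"theta_hat = {theta_hat:.4f} (true {theta}), se = {se:.4f}")
```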
The results revealed that the biorthogonal wavelet transform exhibited superior performance in stabilizing variance and improving analytical accuracy, demonstrating its potential as an effective tool for variance heterogeneity correction in experimental data analysis.</p>2026-02-05T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3186Integrated AHP-TOPSIS Model for Multi-Level Prioritization of Drug Abuse Risk Indicators: Evidence from Bengkulu Province, Indonesia2026-03-16T02:20:25+08:00Rewan Jayadijayadirewan6@gmail.comHerry Suprajitnoherry-s@fst.unair.ac.idMiswantomiswanto@fst.unair.ac<p>This study applies an integrated Analytic Hierarchy Process (AHP) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model to evaluate multilevel risk factors and regional vulnerability to drug abuse in Bengkulu Province, Indonesia. Expert assessment data were formally obtained from the National Narcotics Board (BNN) of Bengkulu Province under procedures approved by the institutional research ethics committee. Consistency testing confirmed that all pairwise comparison matrices had Consistency Ratio values below 0.1, ensuring logical reliability of the data. The AHP analysis generated weighted risk factors that served as input for the TOPSIS framework. The integrated results identified Rejang Lebong Regency as the most vulnerable area, followed by Bengkulu City, Mukomuko, Lebong, Kepahiang, North Bengkulu, Seluma, South Bengkulu, Kaur, and Central Bengkulu. This integrated model provides an evidence-based decision-making framework to prioritize preventive actions and resource allocation for effective drug control policy and public health risk management.</p>2026-01-27T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3182Bootstrap Liu estimator for Almon Distributed Lag Model2026-03-16T02:20:27+08:00Husam Yaseenhusam.waleed@uomosul.edu.iqSalwa Qasimsalwa.kasim@uomosul.edu.iqZakariya Algamalzakariya.algamal@uomosul.edu.iq<p>The Almon distributed lag model is used to study how an explanatory variable affects a dependent variable spread out over a number of time periods, as opposed to an influence that happens instantly. Most of the time, the Almon technique is used to estimate the parameters in the distributed lag model (DLM). Still, this estimator becomes very unstable if the explanatory variables and their lags are highly correlated. A new bootstrapped Liu shrinkage estimator is proposed in this research to deal with multicollinearity in the DLM; it is obtained by gradually narrowing the selection of the biasing parameter. According to the findings of the Monte Carlo study, the new methods lead to lower MSE in all cases compared to the standard methods.
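A sketch of a Liu-type shrinkage estimator with a bootstrap-chosen biasing parameter d, in the spirit of the abstract above. The data are synthetic and highly collinear, and the simple grid search stands in for the paper's gradual-narrowing procedure, which is not detailed in the feed.

```python
import numpy as np

# Liu (1993) estimator: beta_d = (X'X + I)^{-1} (X'y + d * beta_OLS).
def liu(X, y, d):
    XtX, Xty = X.T @ X, X.T @ y
    b_ols = np.linalg.solve(XtX, Xty)
    return np.linalg.solve(XtX + np.eye(X.shape[1]), Xty + d * b_ols)

rng = np.random.default_rng(0)
n, p = 100, 4
Z = rng.normal(size=(n, 1))
X = Z + 0.05 * rng.normal(size=(n, p))      # highly correlated regressors (lags)
beta = np.array([1.0, 0.5, 0.25, 0.125])
y = X @ beta + rng.normal(0, 1, n)

# Bootstrap selection of d: pick the value with the lowest average
# squared prediction error on the out-of-bag observations.
grid, B = np.linspace(0, 1, 21), 200
err = np.zeros_like(grid)
for _ in range(B):
    idx = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), idx)
    for j, d in enumerate(grid):
        b = liu(X[idx], y[idx], d)
        err[j] += np.mean((y[oob] - X[oob] @ b) ** 2) / B

d_best = grid[np.argmin(err)]
print("chosen d:", d_best, "Liu estimate:", liu(X, y, d_best))
```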
Application of the tested methods to real-world data further supports their superiority over the standard approaches.</p>2026-01-25T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2842Framing Adversarial Machine Learning and Federated Learning Threats through MITRE ATLAS2026-03-16T02:20:28+08:00Tarik GUEMMAHtarik.guemmah@usmba.ac.maHakim EL FADILIhakim.elfadili@usmba.ac.ma<p>As the adoption of Federated Learning (FL) accelerates across sectors prioritizing privacy, its decentralized architecture introduces novel cybersecurity threats that remain underrepresented in existing adversarial threat taxonomies. This paper bridges this gap by systematically analyzing FL-specific adversarial techniques and mapping them to the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework, a living knowledge base for Artificial Intelligence Systems threats. Through a structured methodology and systematic literature review of 126 peer-reviewed articles published between 2021 and 2025, complemented by empirical validation through detailed case studies, we find that federated learning is exposed to several critical vulnerabilities, such as model poisoning, privacy leakage, and collusion attacks in both cross-silo and cross-device settings. The in-depth analysis of the current coverage in MITRE ATLAS reveals considerable gaps, and the mitigation measures are critically analyzed in the light of computational overhead, scalability concerns, and regulatory compliance issues. This contribution proposes extensions to the MITRE ATLAS framework, enables AI threat intelligence operationalization, and provides a systematized roadmap for standardizing federated learning threat modeling within the ATLAS framework.</p>2026-01-27T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3226Assessing LSTM and GRU for Multi-Dataset Intrusion Detection in IoT Environments2026-03-16T02:20:29+08:00Walid Alayashwalid@esraa.edu.iqMaha Rahrouh maha.rahrouh@aau.ac.aeAmer Abbas Ibrahim amer.abbas@esraa.edu.iqMarwa Hussien Mohamed eng_maroo1@yahoo.comSaja Theab Ahmed saja.theab@esraa.edu.iqMazen Hamed Albarri mazenb51@gmail.comMohammed Hasan Ahmed comt1.21.191@esraa.edu.iq<p class="MDPI18keywords"><span style="layout-grid-mode: both;">The rapid expansion of the Internet of Things (IoT) has transformed modern connectivity, allowing seamless communication and data exchange between devices and systems. Nevertheless, with this increased interconnectivity come significant cybersecurity problems, subjecting IoT infrastructures to diverse and complex cyber attacks. Therefore, developing smart intrusion detection mechanisms is now essential to ensure the integrity of data, privacy, and network trustworthiness in IoT settings. The deep learning models Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) are evaluated in this study with respect to their capacity to detect IoT cyberattacks. Three new datasets—NF-ToN-IoT, UNSW_NB15, and BoT-IoT—were employed, with preprocessing covering missing-value treatment, categorical variable encoding, feature normalization, and a 70/30 train-test split. Both models perform excellently, achieving 100% accuracy on the BoT-IoT dataset.
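A minimal sketch, assuming TensorFlow/Keras, of the kind of recurrent intrusion detector evaluated above: a binary attack/benign classifier over short windows of flow features. The shapes, layer sizes, and random data are illustrative assumptions rather than the study's configuration; swapping the LSTM layer for a GRU yields the second model.

```python
import numpy as np
import tensorflow as tf

# Recurrent binary intrusion classifier over windows of flow features.
T, F = 10, 20                                   # window length, features per step
X = np.random.rand(1000, T, F).astype("float32")
y = np.random.randint(0, 2, 1000)               # synthetic attack/benign labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.LSTM(64),                   # or tf.keras.layers.GRU(64)
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.3, epochs=3, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))          # [loss, accuracy]
```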
For UNSW_NB15 data, the accuracy of GRU was 97% compared to LSTM's 96%, while LSTM (83%) was slightly better than GRU (81%) for NF-ToN-IoT. These outcomes signify the stronger ability of recurrent models to handle the complexity of IoT data and strengthen the argument that model selection should be guided by dataset characteristics. Future research should explore hybrid and transformer-based architectures to enhance detection of emerging threats. In addition, this work has educational value, offering a practical, laboratory-based guide for teaching students how to apply deep models such as LSTM and GRU to securing IoT systems with empirical evidence.</span></p>2026-03-12T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3362Cost-Aware Deep Neural Network for Credit Card Fraud Detection under Chronological Evaluation2026-03-16T02:20:30+08:00ABDELLATIF ELBADRAOUIabdellatif.elbadraoui@uit.ac.maYassine MOUHSSINE abdellatif.elbadraoui@uit.ac.maAbdelkader El Alaouiabdellatif.elbadraoui@uit.ac.maSaid Ouatik El Alaouiabdellatif.elbadraoui@uit.ac.ma<p>Credit card fraud detection remains challenging due to extreme class imbalance, evolving fraud patterns, and asymmetric misclassification costs. This paper presents a deployment-oriented evaluation and decision-calibration framework for transaction-level fraud detection on the public creditcard.csv dataset, assessed under a strictly chronological train–validation–test protocol that mirrors real-world operation. Our contribution lies in evaluation design and cost-aware decision calibration rather than architectural novelty, focusing on how probabilistic model outputs are translated into operational decisions under explicit cost constraints. A standard feed-forward deep neural network (DNN) is trained on the numerical features using a class-weighted binary cross-entropy loss, with early stopping guided by validation AUC–PR. At deployment time, the decision threshold is selected on the validation window by minimizing an empirical cost function that penalizes false negatives more than false positives. On the held-out test set, the proposed pipeline achieves a ROC-AUC of 0.9489 and a PR-AUC of 0.7813. We show that decision policy choice strongly affects operational outcomes: naive thresholding yields excessive false alarms, whereas validation-based cost-sensitive calibration substantially reduces expected loss. Under C_FN = 10 and C_FP = 1, the cost-optimal threshold yields an expected test cost of 190. Comparisons with logistic regression, random forest, and XGBoost under identical preprocessing, temporal splitting, and decision calibration show that tree-based ensembles remain highly competitive, while the evaluated DNN achieves comparable cost and precision–recall performance.
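The validation-based cost calibration described above can be sketched directly: sweep candidate thresholds on validation probabilities and keep the one minimizing expected cost under C_FN = 10 and C_FP = 1 (the abstract's costs). The probabilities below are synthetic stand-ins for the DNN's outputs, not the paper's data.

```python
import numpy as np

C_FN, C_FP = 10.0, 1.0                           # costs from the abstract

def expected_cost(y_true, p, thr):
    pred = p >= thr
    fn = np.sum((y_true == 1) & ~pred)           # missed fraud
    fp = np.sum((y_true == 0) & pred)            # false alarms
    return C_FN * fn + C_FP * fp

rng = np.random.default_rng(0)
y_val = (rng.random(5000) < 0.002).astype(int)   # ~0.2% fraud rate
p_val = np.clip(0.02 + 0.6 * y_val + rng.normal(0, 0.05, 5000), 0, 1)

thresholds = np.linspace(0.01, 0.99, 99)
costs = [expected_cost(y_val, p_val, t) for t in thresholds]
t_star = thresholds[int(np.argmin(costs))]
print(f"cost-optimal threshold = {t_star:.2f}, validation cost = {min(costs):.0f}")
# t_star is then frozen and applied unchanged to the chronologically later test set.
```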
Overall, the results highlight the importance of combining chronological evaluation with explicit cost-sensitive thresholding for practical fraud detection under severe class imbalance.</p>2026-03-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2859Modeling the Structural and Cyclical Determinants of Morocco's Trade Balance: An Econometric Approach2026-03-16T02:20:32+08:00Khalil Bourouisbourouis.khalil@gmail.comMohamed El Yahyaouimed.elyahyaoui19@gmail.comHassan Oukhouyaoukhouya.hassan@ump.ac.maAbdellali Fadlallaha.fadlallah@insea.ac.maSaida Amineaminesaida52@gmail.com<p>This study examines the dynamics of Morocco's trade balance, with a focus on the distinction between its structural and cyclical components. Firstly, a cyclical adjustment approach is used to neutralize the effects of internal and external economic fluctuations. The rationale behind this method is based on estimating the relationship between trade volumes (exports and imports) and their fundamental factors, including domestic and foreign income and prices. Secondly, a modified version of the Hodrick-Prescott filter, known as the modified Hodrick-Prescott filter (MPHF), is employed to accurately extract the trend components from price indicators and gross domestic product (GDP). Income elasticities of foreign trade are then estimated through autoregressive distributed lag (ARDL) models, which allow for both short- and long-run effects. In order to ensure the accuracy of these findings, a sensitivity analysis is conducted using different alternative methods. Overall, the approach also provides a deeper understanding of persistent disparities in external balances, since it allows for different paths toward the sustainability of Morocco’s current account. The study highlights the role of structural elements in foreign trade phenomena in Morocco, where structural differences are considered the primary cause of the country’s trade deficit. Nevertheless, import and export cycles have had minor effects.</p>2026-01-13T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3519A Lambda Lakehouse Deep Learning Framework for Gold Price Forecasting in Financial Markets 2026-03-16T02:20:33+08:00Mourad Farissm.fariss@uae.ac.maMaryam Maatallahmaryam.maatallah.d23@ump.ac.maHakima Asaidih.asaidi@ump.ac.maMohamed Belloukim.bellouki@ump.ac.ma<p>Gold remains a critical financial asset because of its dual function as both a safe-haven instrument and a key indicator of market stability. Accurate forecasting of gold prices is therefore essential for investors, policymakers, and financial institutions. This study introduces a Lambda–Lakehouse architecture integrated with deep learning models to improve the prediction accuracy of gold price time series. Historical data from 2004 to 2025 were collected, preprocessed, and managed within a cloud-based environment combining AWS S3, Apache Spark, Delta Lake, and Databricks. Three predictive models, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Transformer, were implemented and evaluated using standard metrics (RMSE, MAE, MAPE, R²). Experimental results reveal that LSTM achieved the best performance (RMSE=0.0077, MAE=0.0047, R²=0.9984), outperforming both GRU and Transformer, especially under distributional shifts when prices exceeded 2400 USD.
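The trend/cycle decomposition step in the trade-balance study above can be illustrated with the standard Hodrick-Prescott filter from statsmodels. The paper uses a modified variant, which is not reproduced here; the conventional quarterly smoothing parameter lambda = 1600 is used, and the GDP series is synthetic.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Standard HP trend/cycle decomposition of a synthetic quarterly log-GDP series.
rng = np.random.default_rng(0)
quarters = pd.period_range("2000Q1", periods=96, freq="Q")
log_gdp = pd.Series(
    np.linspace(10, 11, 96) + 0.02 * rng.standard_normal(96).cumsum(),
    index=quarters,
)

cycle, trend = hpfilter(log_gdp, lamb=1600)
print(trend.tail(3))    # structural (trend) component
print(cycle.tail(3))    # cyclical deviations used for cyclical adjustment
```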
The proposed framework demonstrates the benefit of coupling scalable big data architectures with deep sequential models for financial forecasting.</p>2026-03-10T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3153Numerical methods for evolutionary problems considering advection-dominated regimes and control2026-03-16T02:20:34+08:00Andrés Felipe Camelo Muñozafkamelo@utp.edu.coCarlos Alberto Ramírez Vanegascaramirez@utp.edu.coGuillermo Villa Martínezgvilla@utp.edu.co<p>We study a time–dependent advection–diffusion equation with spatially varying advection and heterogeneous diffusivity under homogeneous Dirichlet conditions. The strong and weak formulations are derived and discretized by a conforming Galerkin finite element method, leading to the standard semi–discrete system with mass, stiffness, and advection matrices. Temporal integration is performed with an unconditionally stable implicit Euler scheme. A practical 2D assembly procedure based on a 7–point Gaussian quadrature is detailed. To assess discretization accuracy and mesh independence, we employ $L^{1}$, $L^{2}$, and $H^{1}$ norms together with the Grid Convergence Index (GCI), including Richardson extrapolation and an asymptotic range check via the convergence ratio. Beyond baseline simulations with elementwise constant advection, we formulate and solve a convex optimization problem for an advection field $\gamma_{\mathrm{opt}}$ that minimizes a quadratic functional and steers the solution within a prescribed subdomain. Numerical experiments on structured meshes (n=9,18,36 per direction) demonstrate consistent convergence, CAR values near unity, and reduced dispersion when using $\gamma_{\mathrm{opt}}$, while quantifying uncertainty through GCI. The results confirm the robustness and effectiveness of the proposed FEM framework for evolutionary advection–diffusion problems and provide a reproducible pathway for accuracy verification and transport-field design.</p>2026-01-08T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3139A Penalized Least Squares Estimation of Fourier Series Semiparametric Regression: Theory, Simulation, and Application2026-03-16T02:20:36+08:00 Ihsan Fathoni Amriihsanfathoni@unimus.ac.idNur Chamidahnur-c@fst.unair.ac.idToha Saifudintohasaifudin@fst.unair.ac.idBudi Lestaribudilestari@mipa.unej.ac.idDursun Aydinduaydin@mu.edu.trFebrian Rohimfebriannurrohim12@gmail.com<p>In regression analysis, a functional relationship between response and predictor variables sometimes follows a semiparametric regression model constructed from parametric and nonparametric components, where the nonparametric component is a function of time that is approximated by a Fourier series. In this study, we develop a penalized least squares smoothing technique for estimating a Fourier Series Semiparametric Regression (FSSR) model. Penalized least squares is particularly useful when the generalized cross-validation method cannot select good smoothing parameters because the over-fitting effect in the model is ignored. We also provide a numerical example through a simulation study and apply the proposed method to real data for predicting the earth's surface temperature from relative humidity. The results show that the estimated FSSR model yields a MAPE value of 1.068%.
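A sketch of a penalized least squares fit for a Fourier-series semiparametric regression, in the spirit of the FSSR model above: a parametric part plus a Fourier expansion in time, with a ridge-type penalty applied only to the Fourier coefficients. The basis size K, penalty lambda, and data are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, lam = 200, 5, 1.0
t = np.linspace(0, 2 * np.pi, n)                  # time covariate
x = rng.normal(size=n)                            # parametric covariate
y = 1.5 * x + np.sin(t) + 0.3 * np.cos(2 * t) + rng.normal(0, 0.2, n)

# Design matrix: [intercept, x | cos(jt), sin(jt) for j = 1..K].
fourier = np.column_stack([f(j * t) for j in range(1, K + 1)
                           for f in (np.cos, np.sin)])
Z = np.column_stack([np.ones(n), x, fourier])

# Penalty: zero block for the parametric part, lambda*I for the Fourier block.
P = np.zeros((Z.shape[1], Z.shape[1]))
P[2:, 2:] = lam * np.eye(2 * K)
coef = np.linalg.solve(Z.T @ Z + P, Z.T @ y)      # penalized normal equations

fitted = Z @ coef
rmse = np.sqrt(np.mean((y - fitted) ** 2))
print("parametric effect estimate:", coef[1], " RMSE:", rmse)
# (The paper reports MAPE, which presumes a positive response such as temperature.)
```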
This means that the obtained model falls into the highly accurate category as a prediction model.</p>2026-02-20T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3123Optimal Control Problem of Mathematical Model with Fuzzy Parameter for the COVID-19 Epidemic with Symptomatic and Asymptomatic cases in Indonesia2026-03-16T02:20:37+08:00Muhammad Khariskharis.mat@mail.unnes.ac.idMiswantomiswanto@fst.unair.ac.idCicik Alfiniyahcicik-a@fst.unair.ac.id<p> In this study, we analyze the optimal control model for the COVID-19 epidemic in Indonesia, considering both symptomatic and asymptomatic cases and taking government policies into account. We used three control parameters: policies to prevent the spread of the disease among vulnerable people, quarantine with treatment for symptomatic patients, and infection testing followed by isolation for asymptomatic patients. To obtain the optimal solution, the Pontryagin Maximum Principle and cost-effectiveness analysis methods were used. Based on the cost-effectiveness analysis, it was concluded that implementing the three control measures simultaneously at each time point was significantly more cost-effective in preventing the spread of infection than when only one or two controls were implemented. Another interesting finding was the re-emergence of symptomatic patients when preventive controls were reduced, while asymptomatic patients persisted.</p>2026-03-06T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2734Volatile Count Modelling of COVID-19 Mortality Data: A Zero-Inflated Overdispersed Time Series Framework2026-03-16T02:20:38+08:00Thembhani Hlayisani Chavalalathembhani.chavalala@ul.ac.zaRetius Chifurira thembhani.chavalala@ul.ac.zaKnowledge Chinhamuthembhani.chavalala@ul.ac.zaJacob Majakwarathembhani.chavalala@ul.ac.za<p>Epidemiological count time series often display challenging characteristics such as overdispersion, zero-inflation, and serial dependence. This study explores appropriate statistical frameworks for modelling such data, using daily COVID-19 mortality counts from South Africa and its three most populous provinces as a case study. The observed data exhibited strong serial autocorrelation, excess zeros, overdispersion, and time-varying volatility. To capture these dynamics, we employed hybrid models combining zero-inflated Poisson autoregressive (ZIPA) and zero-inflated negative binomial autoregressive (ZINBA) structures with a Generalized Autoregressive Conditional Heteroskedasticity (GARCH) component. Model comparisons using the Vuong test indicated that the ZINBA model offered a superior fit. Further, a GARCH model applied to the ZINBA residuals effectively accounted for residual heteroscedasticity, as validated by sign-bias testing. These results underscore the utility of integrating zero-inflated count models with GARCH-type volatility components.</p>2026-03-15T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2698Machine Learning Algorithms for the Prediction of the Spread of COVID-19 in Namibia2026-03-16T02:20:39+08:00Claris ShokoShokoc@ub.ac.bwEriyoti Chikodzachikodzae@ub.ac.bw<p>Improving the accuracy and stability of daily COVID-19 forecasts is crucial for effectively managing and controlling the pandemic.
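The zero-inflation and overdispersion diagnostics motivating the ZINBA-GARCH hybrids above can be sketched by simulation. The parameters below are illustrative, not fitted values from the study; the point is that the variance-mean ratio and the excess-zero fraction are exactly the features a plain Poisson model cannot reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pi0, r, mu = 1000, 0.30, 2.0, 8.0        # zero-inflation prob., NB size, NB mean

p = r / (r + mu)                             # numpy's NB parameterization
counts = rng.negative_binomial(r, p, n)      # overdispersed counts
counts[rng.random(n) < pi0] = 0              # structural (excess) zeros

mean, var = counts.mean(), counts.var()
print(f"mean = {mean:.2f}, variance = {var:.2f}, var/mean = {var/mean:.2f}")
print(f"observed zero fraction = {(counts == 0).mean():.3f}")
print(f"Poisson-implied zeros  = {np.exp(-mean):.3f}")
# var/mean >> 1 (overdispersion) and observed zeros >> Poisson-implied zeros
# (zero inflation) are the features the ZIPA/ZINBA-GARCH hybrids accommodate.
```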
This study compares the performance of different machine-learning algorithms in predicting and forecasting the spread of COVID-19 in Namibia. Machine learning approaches including the support vector machine (SVM), TBATS, the generalized additive model (GAM), and the stochastic gradient boosting machine (SGBM) are compared. An initial selection of the best-performing model is made using plots of forecasts from the fitted models on the test dataset, which allow a direct visual comparison. The selection is then refined using key performance indicators (KPIs): root mean square error (RMSE), mean absolute percentage error (MAPE), and the coefficient of determination R<sup>2</sup>. Results show that the positive rate, reproductive rate, and stringency index contribute significantly (p-values<0.05) to the spread of COVID-19 in Namibia. Among the fitted models, the GAM and the linear-kernel SVM are the best performers in forecasting daily COVID-19 cases, although, based on the KPIs, the GAM outperforms the linear-kernel SVM. This study recommends the use of both models to help solve the forecasting problem and identify significant regressors. Accurate prediction and forecasting give the health sector early warning signs and preparedness to help manage and control epidemics. This supports progress toward Sustainable Development Goal 3 on good health and well-being.</p>2026-02-23T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/3111Exploring the Dynamics of Malaria in East Nusa Tenggara, Indonesia: Impact of Relapse, Treatment and Vaccination2026-03-16T02:20:40+08:00Chidozie Williams Chukwucchukwu@georgiasouthern.edu Josiah Mushanyujmushanyu@unam.naS Tchoumisytchoumi83@gmail.com<p>Malaria infection continues to affect numerous countries worldwide, persisting as a public health issue despite recent progress in control measures. Particularly in regions like Africa and the Middle East, malaria remains a significant concern. We formulate two mathematical models to evaluate how vaccination and treatment efforts contribute to combating malaria. Parameter estimation and model validation are performed using the dataset for malaria incidence from Lembata Regency, East Nusa Tenggara Province, Indonesia. The first model is motivated by the increasing demand for a malaria vaccine. Our study results suggest that such a vaccine could reduce the global prevalence of malaria. The second model includes two types of treatment: radical cure and bloodstream treatments. The model reproduction numbers and equilibrium points for both models are established. A global sensitivity analysis is conducted to identify the parameters that significantly impact the model's reproduction number. Numerical analysis is carried out to support theoretical findings. The extended model yields the control thresholds needed to lower the $\mathcal{R}^t_c$ value and fully eradicate malaria in Lembata Regency, East Nusa Tenggara Province, Indonesia. Both models demonstrate the vital importance of vaccination and treatment in combating malaria infection.</p>2026-03-15T00:00:00+08:00Copyright (c) 2026 Statistics, Optimization & Information Computing
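To illustrate the kind of host-vector dynamics analyzed in the malaria abstract above, here is a generic Ross-Macdonald-style model solved numerically with scipy. This is not either of the authors' two models, which add vaccination, relapse, and treatment compartments, and all rates are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic host-vector malaria model: infectious human fraction Ih and
# infectious mosquito fraction Iv (illustrative parameters only).
a, b, c = 0.3, 0.5, 0.5     # biting rate, human/mosquito infection probabilities
m, r, g = 10.0, 0.05, 0.1   # mosquitoes per human, human recovery, mosquito death

def rhs(t, y):
    Ih, Iv = y
    dIh = m * a * b * Iv * (1 - Ih) - r * Ih
    dIv = a * c * Ih * (1 - Iv) - g * Iv
    return [dIh, dIv]

R0 = (m * a * b) * (a * c) / (r * g)     # basic reproduction number of this model
sol = solve_ivp(rhs, (0, 365), [0.01, 0.001], t_eval=np.linspace(0, 365, 366))
print(f"R0 = {R0:.1f}; infectious human fraction after one year = {sol.y[0, -1]:.3f}")
# Control measures act by reducing parameters (e.g. m or a) until R0 < 1,
# which is the eradication-threshold logic the abstract refers to.
```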