Statistics, Optimization & Information Computing
http://47.88.85.238/index.php/soic
<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advances in statistics, optimization, and their applications in the information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial statistics, Survival analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in the life sciences, including the biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision theory, Time series analysis, High-dimensional multivariate integrals, Statistical analysis in marketing, business, finance, insurance, economics and the social sciences, etc.</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization, Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming</li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligent computation, Intelligent control and optimization</li> <li
class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problems and imaging sciences</li> <li class="show">Genetic algorithms, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>International Academic Pressen-USStatistics, Optimization & Information Computing2311-004X<span>Authors who publish with this journal agree to the following terms:</span><br /><br /><ol type="a"><li>Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" target="_new">Creative Commons Attribution License</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li><li>Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li><li>Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li></ol>
Sushila-Geometric distribution, properties, and applications
http://47.88.85.238/index.php/soic/article/view/1722
<p>In this paper, we introduce a compound form of the Sushila distribution that offers a flexible model for lifetime data, the so-called Sushila-geometric $(SG)$ distribution, obtained by compounding the Sushila and geometric distributions. The three-parameter $SG$ distribution is capable of modelling upside-down bathtub, bathtub-shaped, increasing and decreasing hazard rate functions, which are widely used in engineering, economics and the natural sciences. The new model contains some known distributions, such as the Lindley, Lindley-geometric, and Sushila distributions, as special cases (sub-models). Several statistical properties of the $SG$ distribution are derived. Simulation studies are conducted to investigate the performance of the maximum likelihood estimators obtained through the EM algorithm. The flexibility of the new model is illustrated through applications to two real data sets.</p>Sepideh Daghagh, Anis Iranmanesh, Ehsan Ormoz
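The compounding construction described in the abstract can be sketched in a few lines. This is an illustrative sampler, not the paper's derivation: it assumes the minimum-type compounding $X=\min(X_1,\dots,X_N)$ with $N\sim\text{Geometric}(p)$, and uses the fact that a Sushila variable is a scale transform of a Lindley variable (itself a mixture of an exponential and a gamma component).

```python
import random

def rand_sushila(theta, alpha, rng):
    # Sushila(theta, alpha) = alpha * Lindley(theta);
    # Lindley(theta) is Exp(theta) w.p. theta/(theta+1), else Gamma(2, theta).
    if rng.random() < theta / (theta + 1.0):
        y = rng.expovariate(theta)
    else:
        y = rng.expovariate(theta) + rng.expovariate(theta)  # Gamma(2, theta)
    return alpha * y

def rand_sg(theta, alpha, p, rng):
    # Compound draw: minimum of a Geometric(p) number of iid Sushila draws
    # (assumed construction; the paper's exact compounding may differ).
    n = 1
    while rng.random() < 1.0 - p:   # N ~ Geometric(p) on {1, 2, ...}
        n += 1
    return min(rand_sushila(theta, alpha, rng) for _ in range(n))

rng = random.Random(7)
sample = [rand_sg(theta=1.5, alpha=1.0, p=0.4, rng=rng) for _ in range(20000)]
print(sum(sample) / len(sample))   # Monte Carlo mean of the SG draw
```

Taking a minimum of several draws shortens lifetimes, so the Monte Carlo mean should fall below the plain Sushila mean $\alpha(\theta+2)/(\theta(\theta+1))\approx 0.933$ for these parameter values.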
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-03 | 14(6): 2957-2976 | DOI: 10.19139/soic-2310-5070-1722
Using mis-specification effect for selection between inverse Weibull and lognormal distributions
http://47.88.85.238/index.php/soic/article/view/2343
<p>The lognormal and inverse Weibull distributions are two well-known distributions that are very helpful for modelling data in many areas. Distinguishing the true distribution from a false one is therefore of great importance. To identify the correct model, the ratios of biases and of mean squared errors are computed by performing a mis-specification analysis on the mean of these distributions, and the decision is made by comparing these ratios. A simulation study confirms the theoretical results. When the correct model is lognormal, mis-specification as the inverse Weibull (IW) model leads to large values of the ratios of biases and mean squared errors, so in this case mis-specification has little practical chance of occurring. However, when the correct model is IW, there is a substantial risk of falsely specifying the lognormal model. Finally, this methodology is applied to determine the true distribution for a real data set of Covid-19 mortality rates in Germany.</p>Mohammad Mehdi Saber, Parnian Habibi, Mohammad Hossein Zarinkolah, Abdussalam Aljadani, Mahmoud M. Mansour, Mohamed S. Hamed, Haitham M. Yousof
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-02 | 14(6): 2977-2999 | DOI: 10.19139/soic-2310-5070-2343
Bayes estimators for the parameters of the truncated Campbell distribution using Lindley's approximation
http://47.88.85.238/index.php/soic/article/view/2423
<p>In this research, the Campbell (maximum value) distribution is truncated by deleting part of the distribution's domain in such a way that the distribution function retains its probability properties, yielding the truncated Campbell (maximum value) distribution (TC). The maximum likelihood estimators (MLE) and Bayes estimators of the scale and location parameters are derived, the latter using Lindley's approximation under different loss functions, namely the squared error loss function (SEL) and the general entropy loss function (GEL). A simulation study is conducted for several sample sizes (n = 10, 60, 120, 150) and many different values of the scale and location parameters, and the estimators are compared using the mean squared error (MSE) criterion.</p>Najm Abed Oleiwi, Emad Farhood, Najlaa A. Al-khairullah
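How different loss functions yield different Bayes estimators, as in the abstract above, can be illustrated numerically. This sketch uses a simple exponential-rate model with a gamma prior (an illustrative stand-in, not the truncated Campbell likelihood) and a grid posterior: under SEL the Bayes estimator is the posterior mean, while under GEL with parameter $k$ it is $\left[E(\theta^{-k})\right]^{-1/k}$.

```python
import math

# Gamma(a, b) prior on an exponential rate theta; with data x the posterior
# is Gamma(a + n, b + sum x).  We evaluate both Bayes estimators on a grid.
data = [0.8, 1.3, 0.4, 2.1, 0.9]
a, b = 2.0, 1.0
n, s = len(data), sum(data)

grid = [i / 1000.0 for i in range(1, 20001)]                 # theta in (0, 20]
logpost = [(a + n - 1) * math.log(t) - (b + s) * t for t in grid]
mx = max(logpost)
w = [math.exp(lp - mx) for lp in logpost]                    # unnormalized weights
z = sum(w)

post_mean = sum(t * wi for t, wi in zip(grid, w)) / z        # SEL estimator
k = 1.0
inv_moment = sum(t ** (-k) * wi for t, wi in zip(grid, w)) / z
gel_est = inv_moment ** (-1.0 / k)                           # GEL estimator

print(post_mean, gel_est)
```

For this conjugate setup the answers are known in closed form, $(a+n)/(b+s)$ for SEL and $(b+s)^{-1}$-scaled inverse moments for GEL, so the grid values can be checked exactly; the GEL estimator is smaller, reflecting its asymmetric penalty.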
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-22 | 14(6): 3000-3011 | DOI: 10.19139/soic-2310-5070-2423
Analysing Volatility Persistence in the Nairobi Securities Exchange: The Role of Exchange and Interest Rates
http://47.88.85.238/index.php/soic/article/view/2480
<p>The main objective of this paper is to analyse the influence of exchange and interest rates on volatility persistence using asymmetric GARCH models (EGARCH and TGARCH) on NSE data. The relationship between stock return volatility, exchange rates, and interest rates was analysed using ARMA(1,2)-EGARCH(1,1) and ARMA(1,2)-TGARCH(1,1) models under the Student's t and generalised error distribution assumptions, using the NSE daily 20-share price index, interest rates, and exchange rates from 02/01/2015 to 31/12/2024, accounting for 3106 observations. The degree of persistence in the conditional variance equations slightly increased for the ARMA(1,2)-TGARCH(1,1) model and slightly decreased for the ARMA(1,2)-EGARCH(1,1) model with the inclusion of the interest rate and exchange rate, a result that was consistent regardless of the error distribution assumption. Generally, information shocks increase volatility persistence, and negative shocks have a greater impact than positive shocks. The coefficient of the exchange rate ($\delta_2$) is positive and statistically significant for ARMA(1,2)-TGARCH(1,1). Hence, we deduce that volatility in the NSE can be explained by the exchange rate, and there exists a positive relationship: stock returns are positively related to changes in exchange rates. The government should implement policy measures to control the exchange rate, such as real-time disclosure of financial information, trading volumes, and corporate actions, as these affect stock returns.</p>Edwin Moyo, Stanley Jere, Christian Kasumo, Peter Chidiebere Nwokolo, Anthony Mulinge, Clement Mwaanga, Wamulume Mushala
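The persistence quantity discussed in the abstract can be made concrete with a small simulation. This is a generic GJR-form TGARCH(1,1) sketch with illustrative coefficients (not the paper's fitted NSE values); under symmetric innovations the persistence is $\alpha + \beta + \gamma/2$.

```python
import random, math

def simulate_tgarch(omega, alpha, gamma, beta, n, rng):
    # GJR/TGARCH(1,1): sigma2_t = omega + (alpha + gamma*1{eps<0})*eps^2 + beta*sigma2
    sigma2 = omega / (1 - alpha - beta - 0.5 * gamma)   # unconditional variance
    returns = []
    for _ in range(n):
        eps = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        returns.append(eps)
        sigma2 = omega + (alpha + (gamma if eps < 0 else 0.0)) * eps * eps + beta * sigma2
    return returns

rng = random.Random(3)
r = simulate_tgarch(omega=1e-5, alpha=0.05, gamma=0.08, beta=0.88, n=5000, rng=rng)
persistence = 0.05 + 0.88 + 0.5 * 0.08   # alpha + beta + gamma/2 = 0.97
print(persistence)
```

A persistence near (but below) one, as here, produces the slowly decaying volatility clustering the paper's NSE models exhibit; the sample variance of the simulated returns should hover around $\omega/(1-\text{persistence})$.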
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-26 | 14(6): 3012-3025 | DOI: 10.19139/soic-2310-5070-2480
Uniformly more powerful tests than the likelihood ratio test using intersection-union hypotheses for the exponential distribution
http://47.88.85.238/index.php/soic/article/view/2496
<p>In practice, we may encounter hypotheses in which the parameters under test are subject to certain restrictions. These restrictions can be placed in either the null or the alternative hypothesis, in which case the problem does not fit the classical hypothesis testing framework. Statisticians therefore look for more powerful tests rather than most powerful tests. A common approach to such problems is to use intersection-union and union-intersection tests. In this paper, we derive the testing procedure for a simple intersection-union hypothesis and compare it with the likelihood ratio test.</p> <p>We also compare the powers of two exponential sign tests, the rectangle test and the smoother test, and of the simple intersection-union test with that of the likelihood ratio test.</p>Zahra Niknam, Rahim Chinipardaz
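The intersection-union principle mentioned above is simple to demonstrate: when the null is a union, the overall test rejects only if every component test rejects at level $\alpha$, and the size never exceeds $\alpha$. This Monte Carlo sketch uses exponential sign tests with hypothetical thresholds (not the paper's exact rectangle/smoother construction).

```python
import math, random

def binom_sf(k, n, p):
    # P(Bin(n, p) >= k)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def sign_test_reject(x, t, lam0, alpha):
    # One-sided sign test of H0: lam <= lam0 vs lam > lam0 for Exp(lam):
    # under lam0, the indicator {X < t} is Bernoulli(1 - exp(-lam0 * t)).
    k = sum(1 for xi in x if xi < t)
    return binom_sf(k, len(x), 1 - math.exp(-lam0 * t)) <= alpha

def iut_reject(samples, t, lam0, alpha):
    # Intersection-union test: reject only if EVERY component test rejects.
    return all(sign_test_reject(x, t, lam0, alpha) for x in samples)

rng = random.Random(1)
n, reps, alpha = 40, 2000, 0.05
# Size check at a boundary point of the union null (lam1 = lam0, lam2 > lam0):
hits = 0
for _ in range(reps):
    s1 = [rng.expovariate(1.0) for _ in range(n)]
    s2 = [rng.expovariate(3.0) for _ in range(n)]
    hits += iut_reject([s1, s2], t=1.0, lam0=1.0, alpha=alpha)
print(hits / reps)   # empirical size; should not exceed alpha by much
</```

Because the second component has high power while the first sits on the null boundary, the overall rejection rate is governed by the first component's size, illustrating why IUTs are level-$\alpha$ without any multiplicity correction.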
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-26 | 14(6): 3026-3044 | DOI: 10.19139/soic-2310-5070-2496
Quantile-based Study of Interval Inaccuracy Measures
http://47.88.85.238/index.php/soic/article/view/2636
<p>For many models the quantile function is available in tractable form even though the distribution function has no explicit expression. Recently, researchers have shown great interest in quantile-based interval entropy measures. In this paper, we introduce a quantile-based inaccuracy measure for doubly truncated random variables and study its properties, and we discuss some characterization results for the proposed measure. Further, we propose and study the quantile version of the Kullback-Leibler divergence measure for doubly truncated random variables and discuss some of its properties.</p>Seema Tehlan, Vikas Kumar, Anjali Rathee
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-22 | 14(6): 3045-3065 | DOI: 10.19139/soic-2310-5070-2636
A Two-Stage Design for Superior Efficiency in Estimating Sensitive Attributes
http://47.88.85.238/index.php/soic/article/view/2663
<p>Examining sensitive characteristics or data that individuals are hesitant to disclose in surveys poses a challenge due to the ethical duty to protect respondent privacy. Warner’s randomized response (RR) technique, while enabling confidential estimation of such attributes’ prevalence in populations, suffers from increased variance as the likelihood of directly probing sensitive questions rises. To address this limitation, we propose an innovative two-stage RR framework designed to enhance practicality and statistical efficiency compared to Mangat’s model, while improving trustworthiness in real-world applications. Privacy protection metrics were computed for the proposed models, with efficiency analyses consistently showing that the new model surpasses Mangat’s model in efficiency.</p>Ahmad Aboalkhair
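The two-stage randomized response mechanics described above can be sketched with a Mangat-type design (the parameter names T and P, and the specific device, are illustrative assumptions, not the paper's new model): with probability T the respondent answers truthfully, otherwise a Warner device is used, and the attribute prevalence is recovered by inverting the yes-probability.

```python
import random

def simulate_two_stage(pi, T, P, n, rng):
    # Stage 1: with probability T the respondent answers truthfully.
    # Stage 2 (Warner device): with probability P state membership in A,
    # otherwise state membership in not-A.
    yes = 0
    for _ in range(n):
        member = rng.random() < pi
        if rng.random() < T:
            ans = member
        elif rng.random() < P:
            ans = member
        else:
            ans = not member
        yes += ans
    return yes / n

def estimate_pi(lam_hat, T, P):
    # Invert P(yes) = pi*(T + (1-T)*(2P-1)) + (1-T)*(1-P)
    return (lam_hat - (1 - T) * (1 - P)) / (T + (1 - T) * (2 * P - 1))

rng = random.Random(42)
pi_true, T, P, n = 0.30, 0.5, 0.7, 50000
lam_hat = simulate_two_stage(pi_true, T, P, n, rng)
print(estimate_pi(lam_hat, T, P))
```

The estimator is unbiased by construction, and its variance grows as the denominator $T + (1-T)(2P-1)$ shrinks, which is exactly the efficiency trade-off the paper's comparisons with Mangat's model target.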
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-07-28 | 14(6): 3066-3074 | DOI: 10.19139/soic-2310-5070-2663
The Epanechnikov-Rayleigh Distribution: Statistical Properties and Real-World Applications
http://47.88.85.238/index.php/soic/article/view/2754
<p style="margin-left: 18.0pt; text-align: justify;">This article examines the main statistical features of the Epanechnikov-Rayleigh distribution, such as its moments and quantiles, and derives the probability density function (PDF) and cumulative distribution function (CDF). Maximum likelihood estimation (MLE) is used for parameter estimation, and comprehensive simulation studies assess the proposed distribution's performance. Real-world applications in reliability analysis and environmental modeling demonstrate the superior goodness of fit of the Epanechnikov-Rayleigh distribution over the classical Rayleigh distribution and other competing models, as evaluated by the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC).</p>Naser Odat
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-01 | 14(6): 3075-3086 | DOI: 10.19139/soic-2310-5070-2754
Advanced statistical methods for analyzing spatially varying relationships in overdispersed HIV case counts in East Java Province, Indonesia: GWRF vs. GWNBR
http://47.88.85.238/index.php/soic/article/view/2827
<div class="page" title="Page 1"> <div class="layoutArea"> <div class="column"> <p>This study investigates the efficacy of Geographically Weighted Random Forest (GWRF) compared with Negative Binomial Regression (NBR) and Geographically Weighted Negative Binomial Regression (GWNBR) in modeling spatially varying, overdispersed count data, using HIV cases from East Java Province, Indonesia. The dataset covers 38 regencies/cities and examines the relationship between HIV cases and five independent variables. GWNBR incorporates spatial weighting based on an adaptive bisquare kernel function and Euclidean distance, while GWRF combines random forests with geographical weighting. GWRF emerges as the superior model based on RMSE, MAPE, and R² values, outperforming NBR and GWNBR. GWRF identifies five groups based on the three most important predictor variables. In approximately 60% of the region, the percentage of drug users ($X_2$), the percentage of individuals living in poverty ($X_4$), and the open unemployment rate ($X_5$) are identified as important variables. Notably, the percentage of drug users and the open unemployment rate are consistently associated with HIV cases across nearly all regions. This study offers valuable insights into HIV transmission patterns and associated risk factors across the province, contributing to a better understanding of the spatial distribution of HIV cases and informing targeted interventions and resource allocation.</p> </div> </div> </div>Yuliani Dewi, Renata Wijayanti, Mohammad Fatekurrohman
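The bisquare kernel weighting named in the abstract is the standard geographically weighted regression device: observations within the bandwidth get weight $(1-(d/b)^2)^2$, and zero beyond it. A one-line sketch (bandwidth value is illustrative):

```python
def bisquare_weights(dists, bandwidth):
    # Bisquare kernel: w_i = (1 - (d_i / b)^2)^2 if d_i < b, else 0
    return [(1 - (d / bandwidth) ** 2) ** 2 if d < bandwidth else 0.0
            for d in dists]

w = bisquare_weights([0.0, 1.0, 2.0, 3.0], bandwidth=2.5)
print(w)
```

In the adaptive variant used by GWNBR, the bandwidth b is chosen per location so that a fixed number of nearest neighbours receive nonzero weight, rather than being a fixed distance.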
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-24 | 14(6): 3087-3100 | DOI: 10.19139/soic-2310-5070-2827
ECJ-LLM: From Extraction to Judgment; Reinventing Automated Review Annotation
http://47.88.85.238/index.php/soic/article/view/2836
<p>This paper introduces a novel multi-agent debate framework for interest-based data annotation, leveraging Large Language Models (LLMs) to facilitate structured, collaborative annotation. The framework orchestrates three specialized LLM agents, an aspect extractor, a critic, and a judge, which engage in a systematic debate: the extractor identifies relevant aspects, the critic evaluates and challenges these aspects, and the judge synthesizes their input to assign final, high-quality interest-level labels. This debate-driven process not only enhances annotation fidelity and context but also allows flexible customization of the models used in each role and of the interest to be detected. To ensure transparency and quality, the framework incorporates an evaluation suite with metrics such as precision, recall, F1-score, and confusion matrices. Empirical results on a gold-standard hotel review dataset demonstrate that the framework outperforms single-agent methods in annotation quality. A customizable annotation tool, developed as a demonstration of the framework's practical utility, further showcases its flexibility and extensibility for a range of annotation tasks.</p>Loukmane Maada, Khalid Al Fararni, Badraddine Aghoutane, Yousef Farhaoui, Mohammed Fattah
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-12 | 14(6): 3101-3125 | DOI: 10.19139/soic-2310-5070-2836
Multivariate Cubic Transmuted Family of Distributions with Applications
http://47.88.85.238/index.php/soic/article/view/2853
<p>Multivariate distributions are useful for modeling several dependent random variables. It is difficult to develop a unique multivariate skewed distribution, since different forms of the same distribution are available. For this reason, research is ongoing into ways of constructing multivariate families from univariate margins. In this paper, we propose a generalization of the univariate cubic transmuted family to a multivariate family, named the multivariate cubic transmuted (MCT) family. This new family is applied to $p$ baseline Weibull variables, yielding the multivariate cubic transmuted Weibull (CTPW) distribution. Statistical properties of the CTPW distribution are studied, and its parameters are estimated by the maximum likelihood (ML) method. A real data set of bone density measurements, obtained by photon absorptiometry in the peripheral bones of elderly women, is fitted by the CT3W, trivariate transmuted Weibull (T3W) and FGMW distributions. The main theoretical conclusions are that the marginal distributions belong to a multivariate cubic family of dimension less than $p$, and that joint moments of any order depend on the raw moments of each baseline variable and on the moments of the largest order statistics of random samples of sizes two and three drawn from each baseline distribution. In the real application, the CT3W distribution provides the better fit to the bone density data.</p>Hayfa Abdul Jawad Saieed, Khalida Ahmed Mohammed, Mhasen Saleh Altalib
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-01 | 14(6): 3126-3143 | DOI: 10.19139/soic-2310-5070-2853
OGA-Apriori: An Optimized Genetic Algorithm Approach for Enhanced Frequent Itemset Mining
http://47.88.85.238/index.php/soic/article/view/2320
<p>Frequent Itemset Mining (FIM) approaches can be broadly categorized as exact or metaheuristic-based. Exact approaches, such as the classical Apriori algorithm, are highly effective for small to medium-sized datasets but incur significant time complexity on large-scale datasets. Metaheuristic-based approaches, while capable of handling larger datasets, often struggle with precision. To overcome these challenges, researchers have explored hybrid methods that integrate the recursive properties of the Apriori algorithm with various metaheuristic algorithms, including Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). This integration has led to two prominent techniques: GA-Apriori and PSO-Apriori. Empirical evaluations across diverse datasets have consistently shown that these hybrid techniques outperform the traditional Apriori algorithm in both runtime and solution quality. Building upon this foundation, this study introduces an enhanced version of the GA-Apriori algorithm, Optimized GA-Apriori (OGA-Apriori), to improve runtime efficiency and solution accuracy. Comprehensive evaluations on multiple datasets demonstrate that the proposed OGA-Apriori approach achieves superior performance compared to the original GA-Apriori in both runtime and solution effectiveness.</p>Meryem Barik, Abdelfattah Toulaoui, Imad Hafidi, Yassir Rochd
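The exact Apriori baseline that the GA-Apriori hybrids start from fits in a short function: level-wise candidate generation, pruning any candidate with an infrequent subset (the Apriori property), and a support filter. The toy transactions are illustrative.

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Return all frequent itemsets (frozensets) with support >= minsup."""
    tx = [frozenset(t) for t in transactions]
    n = len(tx)

    def support(s):
        return sum(1 for t in tx if s <= t) / n

    items = sorted({i for t in tx for i in t})
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= minsup]
    frequent = {s: support(s) for s in level}
    k = 2
    while level:
        # Join frequent (k-1)-itemsets, then prune candidates that have
        # any infrequent (k-1)-subset (the Apriori property).
        cands = {a | b for a in level for b in level if len(a | b) == k}
        cands = {c for c in cands
                 if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        level = [c for c in cands if support(c) >= minsup]
        frequent.update({c: support(c) for c in level})
        k += 1
    return frequent

data = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = apriori(data, minsup=0.6)
print(sorted((sorted(s), round(v, 2)) for s, v in freq.items()))
```

The repeated full passes over the transaction list are exactly the temporal bottleneck the abstract describes; the GA hybrids replace the exhaustive level-wise search with an evolutionary search over candidate itemsets while keeping support counting as the fitness.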
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-09 | 14(6): 3144-3161 | DOI: 10.19139/soic-2310-5070-2320
An AI-Based Intelligent Approach for Credit Risk Assessment
http://47.88.85.238/index.php/soic/article/view/2509
<p>Credit risk poses a substantial problem for the banking and financial industries, especially when borrowers fail to satisfy their repayment commitments. Conventional approaches face various challenges in effectively anticipating credit risk, including the incidence of fraudulent activity. To avoid these problems, a new approach called the Pigeon U-Net Prediction System (PUNPS) has been developed for credit risk prediction and classification. A credit card transaction dataset was collected from the Kaggle platform and preprocessed to remove duplicate items, and feature selection was applied to keep only the relevant variables. Credit risk prediction was carried out using the fitness function of pigeon optimization, and the resulting credit risk forecasts were classified with the U-Net framework. Finally, the performance of the model was evaluated and the findings were compared with standard approaches. The method offers significant advantages over conventional models, demonstrating improved accuracy in predicting credit risk. Its performance is evaluated through various risk assessment metrics, including F1-score, precision, recall, accuracy, and error rate: it achieves an accuracy of 99.8%, a precision of 99.9%, a recall of 99.7%, and an F1-score of 99.6%, confirming an effective balance between precision and recall and establishing it as a reliable and accurate tool for credit risk assessment, with a minimal error rate of only 0.2%.</p>Mounica Yenugula, Vinay Kumar Kasula, Bhargavi Konda, Akhila Reddy Yadulla, Chaitanya Tumma
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-08-28 | 14(6): 3162-3177 | DOI: 10.19139/soic-2310-5070-2509
Risk Management Strategies in a Dependent Perturbed Compound Poisson Model
http://47.88.85.238/index.php/soic/article/view/2632
<p>This paper studies optimal risk management strategies for an insurer, modelled by a diffusion approximation of a dependent compound Poisson process, who seeks to maximize expected utility by purchasing proportional reinsurance, managing reinsurance counterparty risk, and investing in the financial market in a risky asset such as stocks. The dependent risk model combines a constant reinsurance premium rate with the numbers of claims occurring within a finite time, perturbed by correlated standard Brownian motions, where the price of the risk-free bond is described by a stochastic differential equation. We use the alternative real measure technique to derive the optimal strategies and to solve the associated Hamilton-Jacobi-Bellman equation for the optimization problem, which is formulated as the expectation of a combination of financial market factors under an exponential utility function. A verification theorem is proved to guarantee optimality of the strategy. Finally, numerical illustrations analyze the theoretical results and investigate the sensitivity of the optimal strategies to some of the parameters.</p>Abouzar Bazyari
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-18 | 14(6): 3178-3198 | DOI: 10.19139/soic-2310-5070-2632
A Fine-Tuned CNN for Multiclass Classification of Brain Tumors on Figshare CE-MRI and its Raspberry Pi Deployment
http://47.88.85.238/index.php/soic/article/view/2635
<p>This paper introduces a fine-tuned Convolutional Neural Network (CNN) for multiclass classification of brain tumors on contrast-enhanced T1-weighted MRI scans. The proposed model integrates batch normalization, dropout, and lightweight convolutional blocks to extract discriminative features while maintaining computational efficiency suitable for embedded deployment. Experiments were conducted on the Figshare dataset comprising 3,064 MRI slices from 233 patients with gliomas, meningiomas, or pituitary tumors. Images were preprocessed through resizing and normalization.</p> <p>The model was trained using the Adam optimizer with a learning rate of 1e-4, a batch size of 32, and 100 epochs. Evaluation metrics included accuracy, precision, recall, and F1-score. The fine-tuned CNN achieved an overall accuracy of 94.08%, with class-specific performance indicating strong results for pituitary tumors (precision 95.65%, recall 95.96%) and meningiomas (precision 90.20%, recall 88.81%), while glioma classification showed high sensitivity (recall 96.85%) but lower precision (75.00%). To validate real-world applicability, the model was converted to TensorFlow Lite and deployed on a Raspberry Pi 4, achieving an inference time of approximately 60 ms per image.</p> <p>These findings demonstrate that fine-tuned CNNs can offer a competitive and resource-efficient solution for computer-aided diagnosis of brain tumors, balancing accuracy and practicality in clinical environments with limited computational resources.</p>Alaeddine Hmidi, Neji Kouka, Lina Tekari
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-27 | 14(6): 3199-3213 | DOI: 10.19139/soic-2310-5070-2635
Optimal Integration of Charging Stations in Power Grids using a Hybrid Optimized Recursive Neural Approach
http://47.88.85.238/index.php/soic/article/view/2697
<p>The rapid increase in Electric Vehicle (EV) use has necessitated the construction of numerous charging stations, which require grid services and sophisticated charging controllers. However, establishing an effective charging schedule remains a significant challenge. To address this problem, a new technique called the Crayfish-Lotus Optimized Recursive Model (CLORM) is used to place Electric Vehicle Charging Stations (EVCS) in the distribution system as efficiently as possible. The distributed generation system uses renewable energy sources to provide a reliable and sustainable power supply, including battery energy storage, hydroelectric power, wind turbines, and solar photovoltaic systems. The optimal placement of the EVCS is determined on the basis of low power losses in the distribution system. The system's effectiveness and resilience are tested on the IEEE 33-bus system, ensuring balanced and unbalanced power distribution and stability. The evaluation emphasizes parameters such as voltage, Total Harmonic Distortion (THD), power loss, and processing time. The developed model demonstrates a power loss of 206.7320 kW.</p>S P R Swamy Polisetty, R. Jayanthi, M. Sai Veerraju
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-30 | 14(6): 3214-3234 | DOI: 10.19139/soic-2310-5070-2697
Image Restoration Based on a New Optimal Parameter for the Conjugate Gradient Method
http://47.88.85.238/index.php/soic/article/view/2799
<p>Image denoising plays a vital role in numerous image processing applications. This research presents a novel two-phase conjugate gradient method tailored to mitigating impulse noise. The approach leverages a center-weighted median filter, which adaptively identifies noise-affected pixels, and applies the conjugate gradient technique to restore them. The method minimizes a functional that maintains edge integrity while reducing noise candidates. A key advantage of this technique is its descent-based search mechanism, with global convergence achievable under the Wolfe line search conditions. Experimental evaluations demonstrate the method's effectiveness in removing impulse noise using a spectral conjugate gradient approach.</p>Yousif Ali Mohammed, Basim A. Hassan
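The detection phase described above can be sketched directly: a pixel is flagged as impulse-corrupted when it differs from the center-weighted median of its neighbourhood by more than a threshold. In the paper the flagged pixels are then restored by the conjugate gradient minimization; here a plain median stands in as a placeholder, and the weight and threshold values are assumptions.

```python
from statistics import median

def cwm_filter_detect(img, weight=3, thresh=40):
    """Flag impulse pixels by comparing each interior pixel with the
    center-weighted median of its 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    flagged = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = [img[i + di][j + dj]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            # center-weighted: the centre pixel is counted `weight` times
            neigh += [img[i][j]] * (weight - 1)
            cwm = median(neigh)
            if abs(img[i][j] - cwm) > thresh:
                flagged.append((i, j))
                # placeholder restoration: plain 3x3 median
                out[i][j] = median(neigh[:9])
    return out, flagged

img = [[100, 100, 100, 100],
       [100, 255, 100, 100],
       [100, 100,   0, 100],
       [100, 100, 100, 100]]
restored, flagged = cwm_filter_detect(img)
print(flagged, restored[1][1], restored[2][2])  # prints [(1, 1), (2, 2)] 100 100
```

Only the salt (255) and pepper (0) pixels are flagged, while the flat background passes untouched; in the two-phase scheme, only these flagged pixels enter the edge-preserving functional minimization.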
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-08 | 14(6): 3235-3243 | DOI: 10.19139/soic-2310-5070-2799
Economic Dispatch of Thermal Generators via Bio-Inspired Optimization Techniques
http://47.88.85.238/index.php/soic/article/view/2812
<p class="p1">This paper addresses the economic dispatch problem in thermal power systems using four metaheuristic optimization algorithms: Particle Swarm Optimization (PSO), Crow Search Algorithm (CSA), Salp Swarm Algorithm (SSA), and JAYA algorithm. A deterministic formulation is adopted to minimize the total generation cost over a 24-hour horizon while meeting generator operating constraints and ensuring load balance. A randomly generated dispatch strategy is also included as a baseline. Each algorithm is independently executed 100 times to evaluate robustness, repeatability, and associated CO<span class="s1">2 </span>emissions. Among all methods, PSO achieves the best performance, yielding the lowest total dispatch cost of $82,412.78 and the smallest relative standard deviation (0.12%), along with total CO<span class="s1">2 </span>emissions of 1901.65 kg. Compared to other techniques, PSO provides cost improvements of 0.20% over CSA, 0.28% over SSA, 0.94% over JAYA, and a substantial 29.23% reduction with respect to the random baseline. Moreover, all metaheuristic strategies significantly outperform the random dispatch, demonstrating their ability to generate high-quality and feasible solutions. The PSO-based dispatch strategy efficiently allocates hourly power outputs within technical constraints, introducing a controlled overgeneration margin to compensate for system losses. These results confirm the effectiveness of metaheuristic approaches in complex power system optimization tasks and establish a foundation for future work involving renewable integration, emission constraints, and uncertainty modeling.</p>Cristian Patiño-Cataño, Rubén Iván Bolaños, Jhony Andrés Guzmán-Henao, Luis Fernando Grisales-Noreña, Oscar Danilo Montoya
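The PSO dispatch scheme above can be sketched on a toy problem: two generators with quadratic costs, box limits, and a demand-balance penalty. The cost coefficients, limits, and PSO hyperparameters are illustrative, not the paper's 24-hour system.

```python
import random

# Toy dispatch: two generators with cost a*p^2 + b*p + c, demand D, box limits.
COST = [(0.05, 10.0, 100.0), (0.08, 8.0, 120.0)]
LIMITS = [(10.0, 100.0), (10.0, 80.0)]
D = 120.0

def total_cost(p):
    gen = sum(a * x * x + b * x + c for (a, b, c), x in zip(COST, p))
    return gen + 1e4 * abs(sum(p) - D)     # penalty enforces load balance

def pso(n_particles=40, iters=300, rng=random.Random(0)):
    dim = len(COST)
    pos = [[rng.uniform(*LIMITS[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=total_cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive + social components of the velocity update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], LIMITS[d][0]), LIMITS[d][1])
            if total_cost(pos[i]) < total_cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=total_cost)
    return gbest, total_cost(gbest)

best, cost = pso()
print(best, cost)
```

For this instance the equal-incremental-cost optimum is roughly p1 ≈ 66.2, p2 ≈ 53.8 with cost ≈ $1763, so a well-tuned swarm should land very close while keeping the balance residual near zero.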
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-10 | 14(6): 3244-3266 | DOI: 10.19139/soic-2310-5070-2812
Electrical parameter estimation in solar cells using single-, double-, and three-diode models
http://47.88.85.238/index.php/soic/article/view/2856
<p>Accurately modeling photovoltaic (PV) systems is essential for performance optimization and reliability assessment in renewable energy applications. This study proposes a novel hybrid methodology for parameter estimation in single-, double-, and three-diode PV models, which combines the Equilibrium Optimization Algorithm (EOA) with the Newton-Raphson method to solve the implicit model equations. This approach was implemented in Python and validated using experimental current-voltage (I-V) data from the Kyocera KC200GT solar module.</p> <p>The objective function aimed to minimize the root mean square error (RMSE) between simulated and measured curves, wherein current values were numerically computed via the Newton-Raphson method for each candidate solution. To evaluate the performance of the models, comparisons were carried out under standard testing conditions (STC) with an irradiance level of 1000 W/m². The double-diode model reported the lowest RMSE value under these conditions (RMSE = 0.0416 A), confirming its superior accuracy and adequate balance between complexity and performance.</p> <p>Additionally, two lower irradiance levels (800 W/m² and 400 W/m²) were analyzed in order to assess the consistency of the estimated parameters, i.e., the series resistance (Rs), shunt resistance (Rsh), and ideality factors (n₁, n₂, n₃). This extended analysis revealed that the Rsh parameter exhibits high variability among the three models, with STC showing the greatest deviation (63.28). This further supports the robustness of the proposed method, particularly in the case of the double-diode model.</p> <p>Overall, the hybrid EOA-Newton-Raphson strategy provides a reliable and flexible framework for nonlinear parameter identification in PV systems.</p>Jenny Catalina Garzón-Acosta, Oscar Danilo Montoya Giraldo, César Leonardo Trujillo Rodríguez
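The inner Newton-Raphson step of the hybrid scheme can be sketched for the single-diode case: for each candidate parameter set, the implicit equation $I = I_{ph} - I_s(e^{(V+IR_s)/(nV_t)} - 1) - (V+IR_s)/R_{sh}$ is solved for $I$. The parameter values below are order-of-magnitude typical for a silicon module, not the paper's KC200GT estimates.

```python
import math

def diode_current(V, Iph, Is, n, Rs, Rsh, Vt=0.02585, iters=50):
    """Solve the implicit single-diode equation for I by Newton-Raphson."""
    I = Iph  # start near the photocurrent, a good guess on the flat branch
    for _ in range(iters):
        e = math.exp((V + I * Rs) / (n * Vt))
        g = Iph - Is * (e - 1.0) - (V + I * Rs) / Rsh - I          # residual
        dg = -Is * e * Rs / (n * Vt) - Rs / Rsh - 1.0              # dg/dI
        step = g / dg
        I -= step
        if abs(step) < 1e-12:
            break
    return I

# Illustrative parameters only (hypothetical, not fitted values)
I0 = diode_current(V=0.5, Iph=8.21, Is=1e-9, n=1.3, Rs=0.005, Rsh=50.0)
print(I0)
```

In the full method this solver runs inside the RMSE objective for every measured voltage and every EOA candidate, which is why a cheap, quadratically convergent inner iteration matters.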
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-27 | Vol. 14, No. 6, pp. 3267–3281 | DOI: 10.19139/soic-2310-5070-2856
Stock Price Forecasting Using Gaussian-ADAM Optimized Long Short-Term Memory Neural Networks
http://47.88.85.238/index.php/soic/article/view/2876
<p>This study introduces a novel optimization algorithm, <strong>ItoAdam</strong>, which integrates stochastic differential calculus (specifically Itô's lemma) into the standard Adam optimizer. Although Adam is widely used for training deep learning models, it may fail to converge reliably in complex, non-convex settings. To address this, ItoAdam injects Brownian noise into the gradient updates, enabling probabilistic exploration of the loss surface and improving the ability to escape poor local minima. ItoAdam is applied to train Long Short-Term Memory (LSTM) neural networks for stock price forecasting using historical data from 13 major companies, including Google, Nvidia, Apple, Microsoft, and JPMorgan. Theoretical analysis confirms the <strong>convergence</strong> of the proposed method under mild conditions, ensuring its robustness for deep learning applications. In addition, a <strong>Differential Evolution (DE)</strong> algorithm is employed to automatically optimize critical LSTM hyperparameters such as hidden size, number of layers, bidirectionality, and noise standard deviation. Experimental results show that the ItoAdam-LSTM model consistently outperforms the standard Adam-LSTM approach across evaluation metrics including RMSE, MAE, and R<sup>2</sup>. A detailed sensitivity analysis reveals that optimal forecasting accuracy is typically achieved when the noise standard deviation lies between <strong>2.1 × 10<sup>-4</sup></strong> and <strong>2.9 × 10<sup>-4</sup></strong>. These findings highlight the effectiveness of combining Itô-driven stochastic optimization with evolutionary search and recurrent architectures for robust financial time series prediction in noisy and nonstationary environments.</p>Hamza Lahbabi, Zakaria Bouhanch, Karim El moutaouakil, Vasile Palade, Alina-Mihaela Patriciu
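The core idea of injecting Brownian noise into Adam's gradient updates can be sketched as a single modified Adam step. The exact ItoAdam formulation belongs to the paper; the placement of the noise term and the quadratic test problem below are assumptions for illustration, with the noise standard deviation chosen inside the range reported in the abstract.

```python
import numpy as np

def noisy_adam_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999,
                    eps=1e-8, noise_std=2.5e-4, rng=None):
    """One Adam update with Gaussian ("Brownian") noise added to the gradient.
    state holds the step counter t and the first/second moment estimates m, v."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = grad + noise_std * rng.standard_normal(np.shape(grad))
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias-corrected moments
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# sanity check on f(x) = ||x||^2, whose gradient is 2x
rng = np.random.default_rng(42)
theta = np.ones(3)
state = {"t": 0, "m": np.zeros(3), "v": np.zeros(3)}
for _ in range(2000):
    theta = noisy_adam_step(theta, 2 * theta, state, lr=0.01, rng=rng)
```

On this convex toy problem the noise only perturbs the trajectory slightly; the abstract's claim is that the same perturbation helps escape poor local minima in non-convex LSTM training.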
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-06 | Vol. 14, No. 6, pp. 3282–3298 | DOI: 10.19139/soic-2310-5070-2876
Influence Diagnostics in Gamma regression model using secretary bird optimization algorithm
http://47.88.85.238/index.php/soic/article/view/2899
<p>Influence diagnostics are essential for detecting influential observations, which can distort inference and, in particular, the estimation of the model. Classical diagnostic tools such as Cook's Distance and DFFITS are well established, but they may be less appropriate under model complexities or under different dispersion conditions in the data. In this paper, a novel effort to advance influence diagnostics for Gamma regression models (GRM) is proposed, utilizing a metaheuristic approach called the Secretary Bird Optimization Algorithm (SBOA). To compare the detection ability in the GRM, in terms of TC and MRE, of Cook's Distance, DFFITS, and the SBOA-based SMOX approach, we run an extensive simulation study across sample sizes and dispersion parameters. The simulation results show that Cook's Distance and DFFITS are reliable, but the SBOA-enhanced diagnostic scheme performs better at detecting influential cases, particularly in high-dispersion scenarios and in small to moderate samples. Viewed through the comparative analysis, SBOA offers a more thorough detection mechanism.</p>Luay Adil Abduljabbar, Sabah Manfi Ridha
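For context, Cook's Distance can be computed directly from the hat matrix and residuals. The sketch below does this for an ordinary least squares fit in pure NumPy; the paper applies analogous diagnostics to Gamma regression, so the data and model here are purely illustrative.

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's Distance for each observation of an OLS fit:
    D_i = e_i^2 / (p * s^2) * h_i / (1 - h_i)^2, where h_i is the leverage
    from the hat matrix H = X (X'X)^-1 X' and s^2 the residual variance."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    e = y - H @ y                 # residuals
    s2 = e @ e / (n - p)          # residual variance estimate
    return (e ** 2 / (p * s2)) * (h / (1 - h) ** 2)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=30)
y[0] += 5.0                       # inject one influential outlier at observation 0
d = cooks_distance(X, y)
```

The injected outlier dominates the diagnostic, which is the behavior the classical tools and the SBOA-based scheme are both trying to capture, the latter without relying on a closed-form leverage.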
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-10 | Vol. 14, No. 6, pp. 3299–3309 | DOI: 10.19139/soic-2310-5070-2899
Almost sure asymptotic stability of fractional stochastic nonlinear heat equation
http://47.88.85.238/index.php/soic/article/view/2593
<p>Recently, the fractional stochastic nonlinear heat equation in the Hilbert space L<sup>2</sup>(0, 1), driven by the fractional power of the Laplacian and perturbed by a trace-class noise, was studied by the first and the last authors. They proved the well-posedness, the p-th moment exponential stability, and the almost sure exponential stability of this problem in the semigroup framework. The current work is a continuation of the previously mentioned paper. More particularly, we establish almost sure asymptotic stability under the same conditions imposed in our recent work, together with a regularity assumption on the initial condition. Finally, some examples are provided to illustrate the obtained theory.</p>Zineb Arab, Amel Redjil, Mahmoud Mohamed El Borai
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-28 | Vol. 14, No. 6, pp. 3310–3320 | DOI: 10.19139/soic-2310-5070-2593
An Efficient Approach to Detecting a Copy-Move Forgery in Videos
http://47.88.85.238/index.php/soic/article/view/2677
<p>In the past years, the advancement of editing technologies, especially for multimedia, has contributed to the increased modification of images and videos. Object-based video tampering, such as including or excluding objects in video frames, poses a major challenge to video authentication. Since multimedia content is commonly used across numerous fields, video forgery detection is critical to maintaining media integrity. Among the different types of video manipulation, copy-move forgery is both common and particularly difficult to detect. This study introduces an efficient deep convolutional neural network (DCNN) model to detect copy-move forged videos. The proposed method can be described as follows: first, the video is divided into frames, and a convolutional neural network model is employed to extract features from each frame. We then use these features to train a new CNN model, enabling it to determine whether a particular frame is real or fake. The architecture also incorporates batch normalization to ease layer weight initialization, allow training at higher learning rates, improve accuracy, reduce overfitting, and stabilize the training process. We conducted extensive practical experiments on a massive dataset of videos, which included both original and manipulated content. We use performance metrics including accuracy, precision, recall, F1-score, and the Matthews Correlation Coefficient to assess the proposed model. The recommended method demonstrated superior performance compared to all previously proposed methods on the GRIP, VTD, and SULFA datasets. The model's accuracy was 95.60%, 96.70%, and 100%, with the shortest times of 25.10 sec, 27.35 sec, and 20.22 sec, respectively.</p>Mahmoud Atef, Mohamed Farag, Mohammed Abdel Razek, Gaber Hassan
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-29 | Vol. 14, No. 6, pp. 3321–3345 | DOI: 10.19139/soic-2310-5070-2677
Some results of a random $(b, \theta)$-Enriched contraction with application on non-linear stochastic integral equations
http://47.88.85.238/index.php/soic/article/view/2714
<p>In this paper, we propose a random $(b, \theta)$-enriched contraction operator and prove an existence theorem of random fixed points for this operator. Moreover, we establish an existence result for a solution to a nonlinear stochastic integral equation of Hammerstein type.</p>Krissada Yajai, Orapan Janngam, Wiroj Mongkolthep, Phachara Saipara
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-07 | Vol. 14, No. 6, pp. 3346–3358 | DOI: 10.19139/soic-2310-5070-2714
Diabetes prediction based on Ensemble Methods
http://47.88.85.238/index.php/soic/article/view/2771
<p>The incidence of diabetes, a chronic disease, is increasing worldwide, especially in low- and middle-income countries. To reduce complications and improve patient outcomes, early and accurate prediction is critical. Using two benchmark datasets, this study demonstrates an ensemble-based machine learning framework for diabetes prediction. Two ensemble strategies were evaluated on the Diabetes Prediction dataset and the Pima Indian Diabetes dataset: a sequential ensemble combining XGBoost, gradient boosting, and AdaBoost, and a parallel ensemble using a soft voting classifier that combined logistic regression, a decision tree, and K-Nearest Neighbors. Forward feature selection was used to find the most relevant predictors, improving model performance and generalizability. 70% of the data was used for training, 15% for validation, and 15% for testing. According to the experimental results, the sequential ensemble performed better on the Pima Indian dataset, achieving a training accuracy of 98.95%, a validation accuracy of 97.59%, and an F1 score of 97.77%. This outperformed the parallel ensemble, which achieved an F1 score of 96.62%, a validation accuracy of 96.38%, and a training accuracy of 98.16%. Overall, the sequential model outperformed the parallel model on both datasets. These results demonstrate how feature selection methods and boosting-based ensemble models can work together to create accurate and reliable medical prediction systems.</p>Jihan Askandar Mosa, Adnan Mohsin Abdulazeez
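The parallel ensemble's soft-voting step simply averages the base models' class-probability matrices and predicts the highest-probability class. A minimal sketch of that mechanic follows; the probability matrices are made-up stand-ins for the fitted logistic regression, decision tree, and KNN models.

```python
import numpy as np

def soft_vote(prob_lists, weights=None):
    """Soft voting: average each model's (n_samples, n_classes) probability
    matrix (optionally weighted) and predict the argmax class per sample."""
    P = np.average(np.stack(prob_lists), axis=0, weights=weights)
    return P.argmax(axis=1), P

# hypothetical probabilities from three base models, 4 samples x 2 classes
p_lr  = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.55, 0.45]])
p_dt  = np.array([[0.8, 0.2], [0.3, 0.7], [0.4, 0.6], [0.6, 0.4]])
p_knn = np.array([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5], [0.5, 0.5]])
labels, P = soft_vote([p_lr, p_dt, p_knn])
```

A sequential (boosting) ensemble differs in that each learner is fitted to correct the previous one's errors, whereas this voting step treats the base models as independent and fuses them only at prediction time.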
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-04 | Vol. 14, No. 6, pp. 3359–3379 | DOI: 10.19139/soic-2310-5070-2771
An Interpretable Deep Learning Framework for Multi-Class Dental Disease Classification from Intraoral RGB Images
http://47.88.85.238/index.php/soic/article/view/2880
<p>Dental anomalies and diseases are among the most prevalent health concerns worldwide, and their early and precise diagnosis is critical to ensuring effective treatment and improved patient outcomes. Traditional diagnostic approaches, particularly conventional radiography, are often time-consuming and may not provide sufficient diagnostic accuracy. To address these limitations, this study proposes a robust deep learning framework for the automated classification of dental conditions from intraoral RGB images. Three publicly available datasets (Oral Diseases, Oral Infection, and Teeth Dataset) covering a broad spectrum of dental anomalies were utilized. Five state-of-the-art convolutional neural network (CNN) architectures, namely EfficientNetB3, EfficientNetB0, ResNet50, DenseNet121, and InceptionV3, were systematically evaluated using a unified transfer learning pipeline. Techniques such as stratified 5-fold cross-validation, ensemble inference, focal loss, class weighting, and label smoothing were employed to enhance generalization and mitigate class imbalance. EfficientNetB3 emerged as the optimal model, achieving accuracies of 95.4%, 89.9%, and 99.3% on the three datasets, with Kappa values reaching 0.989. Grad-CAM visualizations confirmed clinically meaningful feature localization, strengthening interpretability. The proposed framework demonstrates strong potential for integration into intelligent clinical decision-support systems, offering an optimal balance between diagnostic accuracy, computational efficiency, and transparency to assist dental practitioners in timely and reliable decision-making.</p>Dawlat Abdulkarim Ali, Haval Tariq Sadeeq
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-07 | Vol. 14, No. 6, pp. 3380–3397 | DOI: 10.19139/soic-2310-5070-2880
From Click to Checkout: Deep Learning for Real-Time Fraud Detection in E-Payment Systems
http://47.88.85.238/index.php/soic/article/view/2891
<p>The rapid expansion of e-commerce has been paralleled by a significant increase in electronic payment (e-payment) transactions, bringing forth pressing challenges in maintaining transactional security. This paper addresses the critical issue of e-payment fraud in e-commerce by leveraging deep learning techniques for real-time fraud detection. With the growing sophistication of fraudulent activities, traditional rule-based fraud detection systems are proving inadequate, necessitating more advanced and adaptable solutions. This study proposes a deep learning model, specifically designed to enhance e-payment security by efficiently identifying fraudulent transactions. The model addresses key challenges such as class imbalance in transaction data and the need for real-time processing capabilities. Through a comprehensive methodology involving data preprocessing, model architecture design, training, and evaluation, the paper demonstrates the effectiveness of deep learning in detecting complex fraud patterns with high accuracy. The findings highlight the potential of deep learning to significantly improve the security of e-payment systems in e-commerce, thereby bolstering consumer trust and the overall integrity of online transactions. This research contributes to the evolving landscape of e-commerce security, offering insights and directions for future advancements in fraud detection technologies.</p>Raouya El Youbi, Fayçal Messaoudi, Manal Loukili, Riad Loukili
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-17 | Vol. 14, No. 6, pp. 3398–3408 | DOI: 10.19139/soic-2310-5070-2891
Restrained Domination Coalition Number of Paths and Cycles
http://47.88.85.238/index.php/soic/article/view/2962
<p>A restrained domination coalition (or simply rd<sub>c</sub>) consists of two disjoint subsets of vertices R<sub>1</sub> and R<sub>2</sub> of a graph G. Neither R<sub>1</sub> nor R<sub>2</sub>, on its own, is a restrained dominating set (RD-set); however, when combined, they together form an RD-set for the graph. A restrained domination coalition partition (rd<sub>cp</sub>) is a vertex partition π<sub>r</sub> = {R<sub>1</sub>, R<sub>2</sub>, ..., R<sub>l</sub>} in which each element R<sub>i</sub> ∈ π<sub>r</sub> is either an RD-set consisting of a single vertex, or a non-RD-set that forms an rd<sub>c</sub> with some set R<sub>j</sub> in π<sub>r</sub>. In this work, we initiate the concepts of the rd<sub>c</sub> and the rd<sub>c</sub>-graph, and we prove the existence of an rd<sub>c</sub> for any simple graph. Moreover, we determine the exact value of this parameter for special graph families such as complete multipartite graphs, paths, and cycles, while establishing the relation between the rd<sub>c</sub>-number and graph invariants such as vertex degree. We further characterize the rd<sub>c</sub>-graphs of paths. This study applies rd<sub>c</sub>-partitioning to cybersecurity, structuring networks into collaborative security clusters that detect, contain, and neutralize threats in real time.</p>A. H. Shola Nesam, S. Amutha, N. Anbazhagan
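The defining conditions of an RD-set and an rd_c can be checked mechanically. The sketch below does so for paths P_n on vertices 0..n-1 (the specific P_6 partition used as a check is an illustration, not an example taken from the paper).

```python
def is_rd_set(n, S):
    """S is a restrained dominating set of the path P_n if every vertex
    outside S has a neighbour in S (domination) and a neighbour outside S
    (restrained condition)."""
    S = set(S)
    for v in range(n):
        if v in S:
            continue
        nb = [u for u in (v - 1, v + 1) if 0 <= u < n]
        if not any(u in S for u in nb) or not any(u not in S for u in nb):
            return False
    return True

def is_rd_coalition(n, R1, R2):
    """Disjoint R1, R2 form a restrained domination coalition if neither
    alone is an RD-set but their union is."""
    R1, R2 = set(R1), set(R2)
    return (not (R1 & R2) and not is_rd_set(n, R1)
            and not is_rd_set(n, R2) and is_rd_set(n, R1 | R2))
```

On P_6, for example, {0, 1, 4, 5} is an RD-set while {0, 1} and {4, 5} separately are not, so those two halves form an rd_c.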
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-26 | Vol. 14, No. 6, pp. 3409–3417 | DOI: 10.19139/soic-2310-5070-2962
Exploring Nonlinear Reaction Kinetics in Porous Catalysts: Analytical and Numerical Approaches to LHHW Model
http://47.88.85.238/index.php/soic/article/view/2976
<p>The article examines a mathematical model for porous catalysts incorporating nonlinear reaction kinetics. Central to this model is the nonlinear steady-state reaction-diffusion equation. The Taylor series method is used to derive analytical solutions for species concentration in various nonlinear Langmuir-Hinshelwood-Hougen-Watson (LHHW) models, each characterized by a distinct fundamental rate function. From this analysis, we derive both straightforward and approximate polynomial expressions for the concentration and effectiveness factors. Furthermore, we compare numerical simulations to the analytical approximations, demonstrating a strong correlation between the numerical results and theoretical predictions. We also compute the concentration and effectiveness factors for the LHHW-type models. The analytical solutions offer valuable insights for optimizing catalytic and biochemical system designs, such as fixed/fluidized-bed reactors, fuel cells, and catalytic converters. They support advances in sustainable chemical production, wastewater treatment, biomedical devices, and energy systems. These results reduce reliance on trial-and-error methods, enabling cost-effective scale-up and improved catalyst longevity. Overall, the findings align well with the aims of statistics, optimization, and information computing for efficient system modeling and design.</p>Regunathan Rajalakshmi, Lakshmanan Rajendran, Sethu Naganathan
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-30 | Vol. 14, No. 6, pp. 3418–3435 | DOI: 10.19139/soic-2310-5070-2976
Intuitionistic L-Fuzzy Soft Ideal Over Semirings
http://47.88.85.238/index.php/soic/article/view/2977
<p>Using the ideas of intuitionistic L-fuzzy ideals of a ring and the (α, β)-cut of intuitionistic L-fuzzy soft ideals, we describe the relation between intuitionistic L-fuzzy soft semi-ideals and intuitionistic L-fuzzy soft semiring homomorphisms. Some results are obtained using regularity concepts and are also analyzed.</p>R. Sakthivel, S. Naganathan, M. Anitha
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-09 | Vol. 14, No. 6, pp. 3436–3446 | DOI: 10.19139/soic-2310-5070-2977
An Explainable Vision Transformer-Based Web Application for Medical Decision-Making: Case of Colon Cancer
http://47.88.85.238/index.php/soic/article/view/3035
<p>Despite the impressive research related to the application of artificial intelligence in the medical field, its adoption in real clinical settings, especially in medical decision-making, remains very limited. Therefore, our objective in this work is to develop a deep learning-based web application that supports medical decision-making. In addition to enabling efficient interaction and knowledge sharing among medical professionals, our web application also provides an accurate prediction system for colon cancer. This system is based on a Vision Transformer (ViT) deep learning model, which is characterized by its attention mechanism that ensures rich contextual representations and captures long-distance dependencies within images. To promote physicians' confidence in the intelligent system, our approach provides clear visual explanations of the ViT predictions using the XAI method LIME. The validation of our model was conducted on a merged dataset of LC25000 and DigestPath images, with an additional external evaluation on the EBHI-Seg dataset. The experimental results demonstrate the competitive performance of the proposed ViT-based approach, which achieved perfect accuracy on the LC25000 dataset, 94.90% on the challenging merged dataset, and a robust accuracy of 92.17% on the unseen EBHI-Seg dataset. This remarkable performance makes the model suitable for real-world clinical applications.</p>Mohamed Abderraouf Ferradji, Asma Merabet, Faycal Zetoutou, Samir Balbal
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-04 | Vol. 14, No. 6, pp. 3447–3467 | DOI: 10.19139/soic-2310-5070-3035
Efficient GRU-based Facial Expression Recognition with Adaptive Loss Selection
http://47.88.85.238/index.php/soic/article/view/3043
<p>As real-world deployment of facial expression recognition systems becomes increasingly prevalent, computational efficiency emerges as a critical consideration alongside recognition accuracy. Current research demonstrates a pronounced emphasis on accuracy maximization through sophisticated convolutional architectures, yet systematic evaluation of efficiency-performance trade-offs remains insufficient for practical deployment scenarios. This paper addresses this gap through a comprehensive analysis of recurrent neural network architectures for facial expression recognition, specifically comparing Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) implementations within a novel one-vs-all classification framework incorporating adaptive loss function selection. A rigorous 2 × 2 × 2 factorial experimental design systematically evaluates architecture (GRU vs LSTM), optimization strategy (Bayesian vs predefined), and loss function complexity (standard vs advanced with auto-selection) across six basic emotions using the CK+ dataset with MediaPipe-based facial landmark features. The investigation reveals that GRU architectures achieve statistical performance equivalence with LSTM while demonstrating a 25% computational efficiency advantage (relative complexity 0.75 vs 1.0). The proposed adaptive loss selection mechanism automatically selects focal loss for severe class imbalance (ratio > 11.5), weighted binary cross-entropy for moderate imbalance (ratio 3.5-11.5), and standard binary cross-entropy otherwise. System performance achieves 92.7% ± 5.0% overall accuracy, with per-emotion F1-scores exhibiting substantial variability from 0.215 (fear) to 0.967 (surprise). Comprehensive statistical analysis incorporating power analysis and practical equivalence testing demonstrates optimization strategy equivalence across 25% of evaluated metrics, while architectural comparisons reveal non-equivalence despite similar performance levels. 
The study acknowledges significant limitations including critically small sample size (n=6 per condition), single dataset validation, and theoretical rather than empirical efficiency validation. These findings provide evidence-based guidelines for architecture selection in resource-constrained facial expression recognition applications, with the adaptive loss selection framework representing a significant methodological contribution for addressing class imbalance challenges in emotion recognition systems.</p>Sri Winarno, Farrikh Alzami, Dewi Agustini Santoso, Muhammad Naufal, Harun Al Azies, Rivaldo Mersis Brilianto, Kalaiarasi A/P Sonai Muthu
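The adaptive loss selection rule quoted in the abstract maps the class-imbalance ratio to one of three loss functions. A direct sketch of that threshold logic follows; treating the 3.5 and 11.5 boundaries as inclusive on the moderate side is an assumption, since the abstract does not specify boundary handling.

```python
def select_loss(pos, neg):
    """Adaptive loss selection per the abstract: focal loss for severe class
    imbalance (ratio > 11.5), weighted binary cross-entropy for moderate
    imbalance (ratio 3.5-11.5), and standard BCE otherwise. The ratio is
    majority-class count over minority-class count."""
    ratio = max(pos, neg) / max(1, min(pos, neg))
    if ratio > 11.5:
        return "focal"
    if ratio >= 3.5:
        return "weighted_bce"
    return "bce"
```

In a one-vs-all setup this rule would be evaluated once per emotion classifier, since each binary split has its own positive/negative class counts.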
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-18 | Vol. 14, No. 6, pp. 3468–3499 | DOI: 10.19139/soic-2310-5070-3043
The Generalized Log-Adjusted Polynomial Family for Reliability and Medical Risk Analysis under Different Non-Bayesian Methods: Properties, Characterizations and Applications
http://47.88.85.238/index.php/soic/article/view/3002
<p>This paper introduces a novel class of continuous probability distributions called the generalized Log-Adjusted Polynomial (GLAP) family, with a focus on the GLAP Weibull distribution as a key special case. The proposed family is designed to enhance the flexibility of classical distributions by incorporating additional parameters that control shape, skewness, and tail behavior. The GLAP Weibull model is particularly useful for modeling lifetime data and extreme events characterized by heavy tails and asymmetry. The paper presents the mathematical formulation of the new family, including its cumulative distribution function, probability density function, and hazard rate function. It also explores structural properties such as series expansions and tail behavior. Risk analysis is conducted using advanced key risk indicators (KRIs), including Value-at-Risk (VaR), Tail VaR (TVaR), and tail mean-variance (TMVq), under various estimation techniques. Estimation methods considered include maximum likelihood (MLE), Cramér–von Mises (CVM), Anderson–Darling (ADE), and their right-tail and left-tail variants. These methods are compared using both simulated and real insurance data to assess their sensitivity to tail events. Finally, the paper provides a comprehensive analysis of risks in the fields of reliability and medicine. The analysis includes examining engineering and medical risks using the aforementioned estimation methods, considering a variety of confidence levels based on five risk measurement and analysis indicators.</p>Mujtaba Hashim, G. G. Hamedani, Mohamed Ibrahim, Ahmad M. AboAlkhair, Haitham M. Yousof
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-07 | Vol. 14, No. 6, pp. 3500–3525 | DOI: 10.19139/soic-2310-5070-3002
On a new stacked ensemble framework for imputing missing data in the presence of outliers
http://47.88.85.238/index.php/soic/article/view/2894
<p>Missing value imputation (MVI) presents a real challenge, which becomes more complicated in the presence of outliers. Although ensemble techniques such as bagging and boosting have been employed for MVI and have shown promising results, stacking has not been investigated in this area, despite its efficiency in prediction tasks. To address this gap, two robust stacking frameworks are proposed for imputing missing data in the presence of outliers, namely RKSF-IM and RESF-IM. The proposed frameworks begin by adding an outlier indicator. They then employ two different stacking configurations, where MissForest, IRMI, and EM are the base learners, and their predicted values are used as inputs to ridge regression, which acts as a meta-learner in the second layer. The RMSE, MAE, and Wasserstein distance metrics of the proposed frameworks are evaluated against those of the mean, median, XGBoost, EM, IRMI, KNN, MissForest, and SVM imputation methods using a simulation study and two real data applications. The simulation study considers different scenarios for missing rates and outliers. The study also investigates the impact of adding an outlier indicator on the performance of the different imputation methods. The proposed stacking configurations show better performance, under the simulation settings, than the competing methods in most scenarios. In addition, many existing imputation methods are further improved by including an outlier indicator variable.</p>Mahmoud Abdel-Fattah, Mai Mohsen, Amany Mousa
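The stacking idea, base imputers feeding a ridge meta-learner fitted on the observed entries, can be sketched in a few lines. Generic base predictions stand in for MissForest, IRMI, and EM here, and the outlier-indicator step is omitted, so this is only a toy version of the RKSF-IM/RESF-IM pipeline, not the paper's implementation.

```python
import numpy as np

def ridge_stack_impute(x, base_preds, lam=0.1):
    """Combine base imputers' predictions for one column with missing values:
    fit closed-form ridge regression, w = (Z'Z + lam*I)^-1 Z'y, on the
    observed entries, then fill the missing entries with Z @ w."""
    obs = ~np.isnan(x)
    Z = np.column_stack(base_preds)          # meta-features from base imputers
    Zo, yo = Z[obs], x[obs]
    w = np.linalg.solve(Zo.T @ Zo + lam * np.eye(Z.shape[1]), Zo.T @ yo)
    filled = x.copy()
    filled[~obs] = Z[~obs] @ w               # impute only the missing entries
    return filled

# toy column with one missing value and two hypothetical base predictions
x = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
base1 = np.array([1.1, 2.1, 3.1, 4.1, 5.1])
base2 = np.array([0.9, 1.9, 2.9, 3.9, 4.9])
filled = ridge_stack_impute(x, [base1, base2])
```

Because the meta-learner is trained only where the truth is observed, a base imputer that is systematically biased (as both toy predictors are here) can still contribute usefully once reweighted.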
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-20 | Vol. 14, No. 6, pp. 3526–3545 | DOI: 10.19139/soic-2310-5070-2894
Improving Facial Expression Recognition in real-world Environments
http://47.88.85.238/index.php/soic/article/view/3171
<p>Facial expressions serve as fundamental cues for understanding human emotions and are a key component of affective computing. Recent advances in deep learning, especially Convolutional Neural Networks (CNNs), have made automated emotion recognition increasingly accurate and scalable. This paper introduces DCRNet, a hybrid deep neural network architecture designed to improve Facial Expression Recognition (FER) under real-world conditions such as occlusion, pose variation, and lighting inconsistency. The network integrates a pre-trained DenseNet121 backbone, multiple Convolutional Block Attention Modules (CBAM), and residual connections to enhance discriminative learning and gradient flow. Preprocessing employs adaptive gamma correction and facial landmark localization, ensuring optimal photometric normalization and emphasis on expressive regions of the face. Comprehensive experiments demonstrate that DCRNet achieves accuracies of 65.80%, 98.98% and 96.25% on the AffectNet, CK+, and KDEF datasets, respectively. It outperforms several recent FER models while maintaining a compact footprint of 11.6 million parameters. Cross-validation across different datasets confirms strong generalization. Statistical significance testing (McNemar and bootstrap analysis) verifies that performance gains are not due to random initialization. Further evaluation includes inference latency, FLOPs, and energy usage on GPU and ARM devices, confirming suitability for edge deployment. Finally, ethical and bias considerations are discussed to ensure responsible use in healthcare, education, and human-machine interaction.</p>Mohamed Abdeldayem, Wael Badawy, Hesham F. A. Hamed, Amr M. Nagy
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-16 | Vol. 14, No. 6, pp. 3546–3564 | DOI: 10.19139/soic-2310-5070-3171
Enhancing Fraud Detection in Health Insurance: Deep Neural Network Approaches and Performance Analysis
http://47.88.85.238/index.php/soic/article/view/3097
<p>This study develops and examines a comprehensive deep learning framework for the detection of multi-class healthcare fraud in National Health Insurance Scheme (NHIS) claims. We examined 20,388 NHIS healthcare claims spanning four claim categories: Phantom Billing, Wrong Diagnosis, Ghost Enrollee, and legitimate claims. Four different deep neural network architectures were developed and evaluated (Simple NN, Deep Wide NN, Regularized NN, and Residual NN), in addition to ensemble methods. The Simple Neural Network achieved the highest overall performance, with a test accuracy of 79.84% and an F1-macro score of 77.76%. Despite possessing only 100,324 parameters (five times fewer than the Deep Wide NN), it outperformed more complex designs while achieving the fastest training time of 40.61 seconds. Multiclass analysis demonstrated exceptional performance in Ghost Enrollee detection (97.84% F1-score) and moderate performance in Phantom Billing detection (61.15% F1-score).</p>Gaber Sallam Salem Abdalla, Mohamed F. Abouelenein, Hatem M. Noaman
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-10 | Vol. 14, No. 6, pp. 3565–3588 | DOI: 10.19139/soic-2310-5070-3097
ContraSoft Set and ContraRough Set with using Upside-down logic
http://47.88.85.238/index.php/soic/article/view/3022
<p>A Soft Set is a parameterized family of subsets of a universe, where each parameter selects elements relevant under that condition. A Rough Set is an approximation framework that employs lower and upper sets to capture definite and possible memberships under an indiscernibility relation. Recent research has explored extensions of Soft Sets and Rough Sets through concepts such as Hyper, SuperHyper, and Tree structures. In this paper, we investigate the <em>ContraSoft Set</em> and <em>ContraRough Set</em>, which enrich Soft Sets and Rough Sets by incorporating the notion of contradiction values. We further examine their mathematical structures, provide simple illustrative applications, and discuss how these constructions can be applied within the framework of upside-down logic.</p>Takaaki Fujita, Raed Hatamleh, Ahmed Salem Heilat
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-10 | Vol. 14, No. 6, pp. 3589–3632 | DOI: 10.19139/soic-2310-5070-3022
Examining the One-Step Implicit Scheme with Fuzzy Derivatives to Investigate the Uncertainty Behavior of Several First-Order Real-Life Models
http://47.88.85.238/index.php/soic/article/view/2250
<p>Differential equations (DEs) are useful for representing a variety of concepts and circumstances. However, when considering the initial or boundary conditions of these DE models, the use of fuzzy numbers is more realistic and flexible, since the parameters can fluctuate within a certain range. Such scenarios are referred to as unexpected conditions, and they introduce the idea of uncertainty. These issues are dealt with using fuzzy derivatives and fuzzy differential equations (FDEs). When there is no exact solution to an FDE, numerical methods are utilized to obtain an approximate solution. In this study, the One-Step Implicit Scheme (OIS) with a higher fuzzy derivative is used extensively to find optimal solutions to first-order FDEs with improved accuracy in terms of absolute error. We evaluate the method's competency by investigating first-order real-life models posed as fuzzy initial value problems (FIVPs) under the Hukuhara derivative. The principles of fuzzy set theory and fuzzy calculus are utilized to give a new generic fuzzification formulation of the OIS approach with the Taylor series, followed by a detailed fuzzy analysis of the problems considered. OIS is shown to be a practical, convergent, and zero-stable approach with an absolute stability region for solving linear and nonlinear fuzzy models, as well as a useful methodology for properly managing the convergence of approximate solutions. The developed scheme's capabilities are demonstrated by providing approximate solutions to real-life problems. The numerical findings demonstrate that OIS is a viable and transformative approach for solving linear and nonlinear first-order FIVPs. The results provide a concise, efficient, and user-friendly approach to dealing with larger FDEs.</p>Kashif Hussain, Ala Amourah, Jamal Salah, Ali Jameel, Emad Az-Zobi, Mohammad A. Tashtoush, Muhammad Zaini Ahmad, Tala Sasa
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-08-31 | Vol. 14, No. 6, pp. 3633–3650 | DOI: 10.19139/soic-2310-5070-2250

Parabolic problem considering diffusion piecewise constant refer to domain using FEM
http://47.88.85.238/index.php/soic/article/view/2490
<p>This paper presents a numerical solution of the one-dimensional heat equation using the Finite Element Method (FEM) with time discretization through the implicit Euler scheme. The formulation considers piecewise constant diffusion coefficients over the spatial domain and employs a weak formulation approach for numerical approximation. The study provides a detailed analysis of the assembly process, including mass, stiffness, and load matrices. Numerical results illustrate the accuracy and stability of the proposed method under different initial conditions and diffusion parameters.</p>Guillermo Villa, Carlos Alberto Ramírez Vanegas, José Rodrigo González Granada
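The discretization described above can be sketched in a few lines. This is an assumed minimal setup, not the authors' code: the 1D heat equation u_t = (k(x) u_x)_x on [0, 1] with zero Dirichlet boundary conditions, linear elements, a lumped mass matrix, a piecewise constant diffusion coefficient per element, and backward Euler in time, with the tridiagonal system solved by the Thomas algorithm.

```python
import math

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    bp, dp = b[:], d[:]
    for i in range(1, n):                      # forward elimination
        m = a[i] / bp[i - 1]
        bp[i] -= m * c[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (dp[i] - c[i] * x[i + 1]) / bp[i]
    return x

def fem_heat_step(u, k_elem, h, dt):
    """One backward-Euler step of (M + dt*K) u_new = M u, lumped mass M = h*I."""
    n = len(u)
    # interior node i is shared by elements i and i+1 (element-wise diffusion)
    main = [h + dt * (k_elem[i] + k_elem[i + 1]) / h for i in range(n)]
    off = [-dt * k_elem[i + 1] / h for i in range(n - 1)]
    return thomas_solve([0.0] + off, main, off + [0.0], [h * ui for ui in u])

n, h = 9, 0.1                                  # 9 interior nodes on [0, 1]
u = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
k_elem = [1.0] * 5 + [2.0] * 5                 # piecewise constant diffusion
for _ in range(50):
    u = fem_heat_step(u, k_elem, h, dt=1e-3)
```

Backward Euler keeps the scheme unconditionally stable, so the profile decays smoothly and stays positive even with the diffusion jump at x = 0.5.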
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-18 | Vol. 14, No. 6, pp. 3651–3666 | DOI: 10.19139/soic-2310-5070-2490

Modeling the Impacts of Vaccination and other Interventions on Malaria Transmission Dynamics
http://47.88.85.238/index.php/soic/article/view/2582
<p>Malaria remains a persistent global health challenge, with its burden concentrated in Sub-Saharan Africa and other endemic regions where transmission is sustained by interactions between human and mosquito populations. Despite progress in prevention and treatment, the emergence of partial immunity, asymptomatic carriers, and insecticide resistance complicates control efforts. In this study, we formulate and analyze a nonlinear compartmental model that incorporates a vaccination class alongside traditional malaria interventions. The model’s mathematical properties are established by proving the positivity and boundedness of solutions, and by deriving the disease-free and endemic equilibria. Using the Diekmann-Heesterbeek-Metz Next Generation Matrix approach, we obtain the effective reproduction number and conduct rigorous local and global stability analyses of both equilibria. Furthermore, local sensitivity analysis is performed to identify key parameters driving transmission, highlighting the roles of vaccine uptake, waning immunity, mosquito–human contact rate, and vaccine efficacy. Numerical simulations illustrate the epidemiological impact of vaccination, showing that increased vaccine coverage substantially reduces infection prevalence and sustains lower transmission levels. To complement this, we extend the analysis with a cost-effectiveness evaluation of three optimal control strategies combining insecticide-treated nets, diagnostic surveillance, and environmental sanitation. The results show that while single or dual interventions moderately reduce infections, the integrated triple-intervention strategy together with the vaccinated compartment achieves the greatest epidemiological impact while also being the most cost-effective, yielding the lowest ACER and a negative ICER, indicating cost savings. These findings emphasize that vaccination, when combined with other interventions, not only reduces malaria burden but also represents an economically justified approach to sustainable control.</p>Ayodeji Sunday Afolabi, Miswanto Miswanto
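The next-generation-matrix step mentioned above can be illustrated on a toy host-vector system with infected compartments I_h and I_v. All parameter values here are hypothetical, and this is not the paper's full model with vaccination.

```python
import math

# Toy next-generation computation: new-infection matrix F and transition
# matrix V for a minimal host-vector model,
#   F = [[0, b*beta_hv],      V = [[gamma, 0],
#        [b*beta_vh, 0]]           [0, mu_v]]
# R0 is the spectral radius of F V^{-1}; for this 2x2 structure it reduces to
# sqrt(b^2 * beta_hv * beta_vh / (gamma * mu_v)).

b, beta_hv, beta_vh = 0.3, 0.5, 0.4   # biting rate, transmission probabilities
gamma, mu_v = 0.1, 0.05               # human recovery rate, mosquito death rate

# F V^{-1} = [[0, a], [c, 0]]  ->  spectral radius sqrt(a * c)
a = b * beta_hv / mu_v
c = b * beta_vh / gamma
R0 = math.sqrt(a * c)
```

An R0 above 1 signals sustained transmission, and the closed form makes the local sensitivity of R0 to each parameter (e.g. the contact rate b) easy to read off.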
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-02 | Vol. 14, No. 6, pp. 3667–3705 | DOI: 10.19139/soic-2310-5070-2582

A Novel Dmey Wavelet Charts for Controlling and Monitoring the Average and Variance of Quality Characteristics
http://47.88.85.238/index.php/soic/article/view/2621
<p>Shewhart charts for controlling and monitoring the average and variance of quality characteristics can be affected by data noise. This article proposes novel charts based on wavelet analysis, specifically the Dmey wavelet, to handle data noise. The suggested charts are founded on the discrete wavelet transform of the Dmey wavelet, which divides the data into two parts: approximation coefficients and detail coefficients. The detail coefficients are proportional to the variance of the observations, or the differences between the observations; through them, the D chart is constructed, which corresponds to the Shewhart chart for the variance. The approximation coefficients are proportional to the average of the observations; through them, the A chart is constructed, which corresponds to the Shewhart chart for the average. Both simulated data and actual data on the weights of infants at Valia Hospital in Erbil were used to illustrate the effectiveness of the suggested charts and to compare them with Shewhart charts. According to the results, the weights of the infants at Valia Hospital were under control, and the suggested charts were effective at treating noise and were more responsive than Shewhart charts to even small changes in the quality characteristics.</p>Talal Abd Al-Razzaq Saead Al-Hasso, Mahmood M Taher, Taha Hussein Ali
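The A/D chart construction can be sketched with a Haar transform, substituted here for the Dmey wavelet purely to keep the example dependency-free: approximation coefficients feed an A chart for the level, detail coefficients feed a D chart for dispersion, each with Shewhart-style 3-sigma limits. The data are synthetic, loosely modeled on the infant-weight application.

```python
import math, random

def haar_dwt(x):
    """One-level Haar DWT: pairwise averages (approx) and differences (detail)."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return approx, detail

def limits(coeffs):
    """Shewhart-style 3-sigma control limits for a coefficient series."""
    m = sum(coeffs) / len(coeffs)
    s = math.sqrt(sum((c - m) ** 2 for c in coeffs) / (len(coeffs) - 1))
    return m - 3 * s, m + 3 * s

random.seed(0)
data = [random.gauss(3.2, 0.4) for _ in range(64)]  # e.g. infant weights (kg)
A, D = haar_dwt(data)           # A chart tracks the level, D chart dispersion
a_lo, a_hi = limits(A)
d_lo, d_hi = limits(D)
```

The transform is orthogonal, so no "energy" in the data is lost when it is split between the two charts (Parseval's identity).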
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-08-02 | Vol. 14, No. 6, pp. 3706–3717 | DOI: 10.19139/soic-2310-5070-2621

Hybrid Indoor Security System based on Millimetre Wave Radar, RFID, and Face Recognition
http://47.88.85.238/index.php/soic/article/view/2680
<p>The goals of an efficient indoor security system are availability, integrity, confidentiality, and traceability. The objective of this research article is to reduce the crime rate in closed locations such as libraries and museums, among other significant locations. This work presents an integrated multi-sensor system for real-time detection, tracking, and identification of people in indoor locations, including exhibition halls and museums. Combining data from Ultra High Frequency (UHF) RFID, millimeter-wave (mmWave) radar (TI IWR14), and enhanced camera-based computer vision (YOLOv5-Tiny) produces consistent occupancy monitoring and intruder identification even in low-light or occluded environments. To address latency and detection inconsistency, a Kalman filter-based fusion technique aligns data across modalities, while edge acceleration with TensorRT enables real-time vision analysis. The system includes a MATLAB-based GUI for visual feedback and alarms. Compared to standard single-sensor systems, our approach enhances range coverage, detection speed, and environmental robustness. Experimental findings demonstrate the framework's accuracy, low false-alarm rate, and suitability for smart surveillance applications.</p>Mohamed Refaat Abdellah, Ahmad M. Nagm, Ahmed Abdelhafeez, Moshira Ebrahim, Mohammed Safy
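The fusion step can be illustrated with a scalar Kalman update: each sensor reading is weighted by its noise variance, so the fused estimate leans toward the more reliable modality. The numbers are illustrative; the paper fuses RFID, mmWave radar, and vision tracks in a full state-space setting.

```python
# Minimal scalar Kalman measurement update (illustrative values, not the
# paper's multi-modal tracker).

def kalman_update(x, P, z, R):
    """Fuse reading z (noise variance R) into estimate x (variance P)."""
    K = P / (P + R)                        # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1e6                            # vague prior on target range (m)
for z, R in [(4.1, 0.25), (3.9, 0.04)]:    # radar reading, then vision reading
    x, P = kalman_update(x, P, z, R)
```

After both updates the estimate sits between the readings, closer to the lower-noise vision measurement, and the posterior variance is smaller than either sensor's noise variance, which is the payoff of fusing modalities.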
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-23 | Vol. 14, No. 6, pp. 3718–3740 | DOI: 10.19139/soic-2310-5070-2680

Quantitative and Qualitative Methods for Screening Scientific Grant Projects and Applications
http://47.88.85.238/index.php/soic/article/view/2716
<p>This article aims to investigate different methods of evaluating scientific grant applications and projects, including quantitative and qualitative approaches to their analysis. Regression analysis, Bayesian networks and multi-criteria evaluation were used in the study. Quantitative analyses included statistical methods to compare the performance of different projects to identify patterns and trends affecting the success of research initiatives. The study provided unique insights into how quantitative and qualitative methods can help improve the objectivity of science project evaluation. Specific numerical measures of the methods’ effectiveness were collected and analysed, identifying the key benefits of each approach. Results showed that regression analysis is effective for predicting dependent variables based on linear relationships, Bayesian networks are useful for modelling complex relationships and accounting for a priori knowledge, especially when dealing with incomplete data, and multi-criteria evaluation provides a structured and transparent decision-making process based on multiple criteria.</p>Zhanna Ixebayeva, Zhenis Bagisov, Dina Abulkassova, Akmaral Khamzina, Aizhan Iskaliyeva
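The multi-criteria evaluation idea can be sketched as a weighted-sum ranking. The criteria, weights, and scores below are invented for illustration; the article also covers regression and Bayesian-network approaches that this sketch does not attempt.

```python
# Toy weighted-sum multi-criteria evaluation of grant proposals
# (hypothetical criteria and scores).

def mcda_rank(proposals, weights):
    """Rank proposals by the weighted sum of their criterion scores."""
    total = {name: sum(w * s for w, s in zip(weights, scores))
             for name, scores in proposals.items()}
    return sorted(total, key=total.get, reverse=True), total

weights = [0.40, 0.35, 0.25]          # e.g. novelty, feasibility, impact
proposals = {
    "A": [0.9, 0.6, 0.7],
    "B": [0.7, 0.8, 0.8],
    "C": [0.5, 0.9, 0.6],
}
ranking, scores = mcda_rank(proposals, weights)
```

Making the weights explicit is what gives the method the "structured and transparent decision-making" quality the abstract highlights: changing a weight changes the ranking in an auditable way.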
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-18 | Vol. 14, No. 6, pp. 3741–3760 | DOI: 10.19139/soic-2310-5070-2716

The XBART-Poisson Classification Model for COVID-19 Data Analysis in Egypt
http://47.88.85.238/index.php/soic/article/view/2738
<p>This paper aims to predict daily case and mortality counts, classify high-risk periods, and provide interpretable, probabilistic insights into COVID-19 trends in Egypt by applying the Extreme Bayesian Additive Regression Trees with Poisson likelihood (XBART-Poisson) model to the COVID-19 data. The model is adapted to the pandemic's count-based data, such as daily cases, mortality counts, and recovery rates, offering a Bayesian probabilistic approach to forecast trends and analyze epidemiological factors. The Poisson likelihood effectively handles the discrete nature of these data. Performance is benchmarked against traditional classification techniques, revealing XBART-Poisson's robustness in capturing key trends and providing accurate predictions of COVID-19 progression in Egypt. The study finds that the suggested model is more accurate than traditional models such as Logistic Regression, Decision Tree, Random Forest, and XGBoost.</p>Hanaa Elgohari
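The role of the Poisson likelihood can be shown on synthetic daily counts: the log-likelihood scores candidate rates, and its maximizer for i.i.d. counts is simply the sample mean. This is a sketch of the likelihood only, not the XBART-Poisson tree ensemble.

```python
import math

# Poisson log-likelihood for count data (synthetic counts, hypothetical).

def poisson_loglik(counts, lam):
    """Log-likelihood of i.i.d. Poisson(lam) observations."""
    return sum(y * math.log(lam) - lam - math.lgamma(y + 1) for y in counts)

counts = [12, 15, 9, 14, 11, 13, 10]            # e.g. one week of daily cases
lam_mle = sum(counts) / len(counts)             # MLE of the rate is the mean

# the MLE beats any other candidate rate in log-likelihood:
best = max([8.0, 10.0, lam_mle, 14.0, 16.0],
           key=lambda l: poisson_loglik(counts, l))
```

In the Bayesian tree model the rate varies with covariates rather than being a single constant, but each leaf's contribution is scored by exactly this kind of Poisson likelihood term.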
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-12 | Vol. 14, No. 6, pp. 3761–3775 | DOI: 10.19139/soic-2310-5070-2738

Least Squares Spline Estimation Method in Semiparametric Time Series Regression for Predicting Indonesia Composite Index
http://47.88.85.238/index.php/soic/article/view/2704
<p>The Least Squares Spline (LS-Spline) method offers a flexible approach for modeling fluctuating time series data by adaptively positioning knots at points of structural change. This study develops an LS-Spline estimation method for the Semiparametric Time Series Regression (STSR) model, combining an autoregressive structure as the parametric component and multiple nonparametric functions to capture nonlinear effects. The model is applied to predict the Indonesia Composite Index (ICI), a key indicator of sustainable economic growth. In this framework, the ICI at lag-1 is modeled parametrically, while the BI Rate and Inflation are modeled nonparametrically. Four data-splitting schemes (6, 12, 18, and 24 months of testing data) are used to evaluate forecasting performance over short-, medium-, and long-term horizons. Results show that the LS-Spline STSR model consistently achieves high predictive accuracy, with MAPE and sMAPE below 10% and MASE below 1. Residual diagnostics using the ACF and PACF confirm that the model satisfies the white-noise assumption. These findings emphasize the potential of the LS-Spline STSR model as an economic forecasting tool that can support policies related to one of the Sustainable Development Goals (SDGs), namely sustainable economic growth.</p>Any Tsalasatul Fitriyah, Nur Chamidah, Toha Saifudin, Budi Lestari, Dursun Aydin
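A least-squares spline with a truncated-power basis and a single knot can be fit in closed form via the normal equations. The toy data below have a known kink at x = 0.5; the paper embeds such spline terms inside a semiparametric time series model with adaptively chosen knots, which this sketch does not attempt.

```python
# Minimal least-squares linear spline with one knot (toy data, hypothetical).

def design_row(x, knot):
    # basis: 1, x, (x - knot)_+   -> a linear spline with one knot
    return [1.0, x, max(x - knot, 0.0)]

def lstsq_fit(xs, ys, knot):
    """Solve the normal equations X'X beta = X'y by Gaussian elimination."""
    X = [design_row(x, knot) for x in xs]
    p = 3
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(p)]
    for i in range(p):                         # forward elimination
        for k in range(i + 1, p):
            m = A[k][i] / A[i][i]
            A[k] = [akj - m * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= m * b[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return beta

xs = [i / 40 for i in range(41)]               # grid on [0, 1]
ys = [2 * x if x < 0.5 else 1 - 0.5 * (x - 0.5) for x in xs]   # kink at 0.5
beta = lstsq_fit(xs, ys, knot=0.5)             # expect about [0, 2, -2.5]
```

The coefficient on the truncated term is the slope change at the knot, which is why placing knots at structural-change points lets the spline track regime shifts in the series.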
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-14 | Vol. 14, No. 6, pp. 3776–3803 | DOI: 10.19139/soic-2310-5070-2704

A Computational Study for Probabilistic Fuzzy Linear Programming Using Machine Learning in the Case of Poisson Distribution
http://47.88.85.238/index.php/soic/article/view/2800
<p>This paper presents a computational study of probabilistic fuzzy linear programming using machine learning. Two opposite probabilistic fuzzy constraints are considered, where the random variable in the two constraints is discrete with a Poisson distribution. The data were generated from a Poisson distribution under different scenarios. Five scenarios are investigated, based on either the same mean parameter with different dispersions of the values of the random variable, or the same values of the random variable with different mean parameters, while eight cases are derived by considering different combinations of fuzzy probabilities. These configurations of scenarios and cases allow us to compare how models perform while varying both the mean parameter and the range of the values through different combinations of fuzzy probabilities, enabling a thorough evaluation of how these changes impact model performance. Nine machine learning models are considered in this study for evaluating the different scenarios and cases in predicting the target decision variables. The Poisson distribution is beneficial in fields such as telecommunications, healthcare, logistics, and reliability engineering, where the frequency of arrivals, failures, or demands exhibits Poisson-like behavior but is additionally affected by ambiguity or incomplete information. This study therefore provides decision-makers with a useful tool for carefully selecting combinations of fuzzy probabilities in light of the possible values of the Poisson random variable, especially when the associated probabilities are not specified.</p>Maged George Iskander, Israa Lewaaelhamd
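A probabilistic constraint with a Poisson random variable has a simple deterministic equivalent: P(X ≤ x) ≥ α holds exactly when x is at least the α-quantile of the distribution. The rate and confidence level below are illustrative, and this sketch omits the fuzzy-probability layer the paper adds on top.

```python
import math

# Deterministic equivalent of a Poisson chance constraint (toy values).

def poisson_cdf(x, lam):
    return sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(x + 1))

def chance_constraint_rhs(lam, alpha):
    """Smallest integer x with P(X <= x) >= alpha."""
    x = 0
    while poisson_cdf(x, lam) < alpha:
        x += 1
    return x

x_req = chance_constraint_rhs(lam=4.0, alpha=0.95)
```

In a linear program this quantile becomes the right-hand side of the corresponding constraint, so varying the mean parameter or the fuzzy probability level directly shifts the feasible region, which is what the scenario/case comparisons in the paper probe.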
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-08 | Vol. 14, No. 6, pp. 3804–3815 | DOI: 10.19139/soic-2310-5070-2800

Using Bayesian AR-ESN for climatic time series forecasting
http://47.88.85.238/index.php/soic/article/view/2902
<p>Bayesian ARIMA models offer a solid approach for analyzing time series data, providing more flexibility than traditional recursive models; they also effectively combine prior knowledge with current data to handle uncertainty. A particular kind of Bayesian ARIMA model with comparable considerations is the Bayesian AR model. While Bayesian models employ prior information to estimate a wide range of possible parameter values, older methods frequently use maximum likelihood estimation to obtain single point values for parameters. To handle uncertainty effectively, Bayesian methods also construct a posterior distribution. The applicability of Bayesian techniques to AR(p) models is examined in this work, which demonstrates their capacity to manage noisy, non-stationary, or incomplete data while allowing thorough probabilistic inference, improving the understanding of uncertainty and validating probabilistic forecasts. The Bayesian AR model states that present values depend linearly on past values, perturbed by white noise. We use prior distributions to evaluate the variance and establish the model parameters; these values are then adjusted in response to observations, resulting in more complex and adaptable posterior distributions. The Bayesian ARIMA model aids forecasting and inference when time series are more complicated and require variance considerations. Bayesian AR(p) models capture the temporal correlations between data points regardless of stationarity. These models are commonly estimated using Markov chain Monte Carlo (MCMC) techniques such as Metropolis-Hastings and Gibbs sampling, and they perform well when handling asymmetry, incomplete data, and structural changes. Even when used in a Bayesian manner, however, traditional models struggle to capture uncertain time series or intricate nonlinear patterns. These issues can be addressed with the appropriate use of an Echo State Network (ESN), an effective recurrent neural network for forecasting evolving time series. To identify the most effective inputs for the ESN, the hybrid Bayesian AR-ESN methodology utilizes the optimal configuration of the Bayesian AR model; this approach is recognized for its capacity to accurately model nonlinear interactions. In this study, a Bayesian AR model and an ESN model were integrated into a hybrid Bayesian AR-ESN methodology. The results show that combining Bayesian AR and ESN significantly increases forecasting accuracy, particularly as measured by forecasting error metrics; compared to conventional techniques, the Bayesian hybrid model significantly increases predictive accuracy.</p>Shahla Tahseen Hasan, Osamah Basheer Shukur
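The Bayesian AR idea can be shown in its simplest conjugate form: for an AR(1) with known noise variance and a Gaussian prior on the coefficient, the posterior is Gaussian in closed form. This is a hypothetical sketch; the paper estimates AR(p) models via MCMC and feeds the result into an ESN stage that this example omits.

```python
import random

# Conjugate Bayesian update for an AR(1) coefficient phi (known sigma^2):
#   y_t = phi * y_{t-1} + eps_t,  eps_t ~ N(0, sigma^2),  phi ~ N(m0, v0)

def ar1_posterior(y, m0, v0, sigma2):
    """Closed-form Gaussian posterior (mean, variance) for phi."""
    sxx = sum(a * a for a in y[:-1])
    sxy = sum(a * b for a, b in zip(y[:-1], y[1:]))
    precision = 1.0 / v0 + sxx / sigma2
    mean = (m0 / v0 + sxy / sigma2) / precision
    return mean, 1.0 / precision

random.seed(3)
phi_true, sigma = 0.8, 0.1
y = [0.0]
for _ in range(499):                     # simulate an AR(1) path
    y.append(phi_true * y[-1] + random.gauss(0.0, sigma))

m, v = ar1_posterior(y, m0=0.0, v0=1.0, sigma2=sigma ** 2)
```

The posterior mean concentrates near the true coefficient while the posterior variance quantifies the remaining uncertainty, which is exactly the information a downstream forecasting stage can exploit; for unknown variance or AR(p) the same logic is carried out with MCMC rather than in closed form.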
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-14 | Vol. 14, No. 6, pp. 3816–3832 | DOI: 10.19139/soic-2310-5070-2902

Advancing Structural Health Monitoring with Lightweight Real-Time Deep Learning-Based Corrosion Detection
http://47.88.85.238/index.php/soic/article/view/2919
<p>Structural Health Monitoring (SHM) is essential for preserving the safety and service life of industrial infrastructure. Corrosion, in particular, remains one of the most critical degradation phenomena, demanding timely and accurate detection to prevent structural failures and costly downtime. This study proposes a lightweight, real-time corrosion detection framework tailored for SHM applications. The framework integrates design elements inspired by the latest YOLOv11 and YOLOv12 architectures while incorporating task-specific optimizations for detecting small, irregular corrosion patterns under diverse environmental conditions. Two curated datasets, augmented with domain-specific transformations, are used to enhance model robustness and generalization. Comprehensive benchmarking against previous YOLO versions (YOLOv3, YOLOv5, YOLOv7, YOLOv8) demonstrates that our optimized YOLOv11m configuration achieves up to 7.7% improvement in mAP@50 and 12.1% in mAP@50–95 over YOLOv8m, while the YOLOv12s variant offers a competitive accuracy–speed trade-off. These findings highlight the potential of the proposed approach for deployment in edge-based SHM systems for real-time industrial monitoring.</p>Safa Abid, Mohamed Amroune, Issam Bendib, Chams Eddine Fathoun
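The mAP@50 and mAP@50–95 figures quoted above rest on the intersection-over-union (IoU) criterion: a predicted box counts as a true positive only if its IoU with a ground-truth box clears the threshold. The box coordinates below are illustrative.

```python
# IoU for axis-aligned boxes given as [x1, y1, x2, y2] (toy coordinates).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

pred, gt = [10, 10, 50, 50], [20, 20, 60, 60]
tp_at_50 = iou(pred, gt) >= 0.5       # true positive under the mAP@50 rule
```

mAP@50–95 averages this matching over thresholds from 0.5 to 0.95, which is why it rewards the tight localization that matters for small, irregular corrosion patches.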
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-10-15 | Vol. 14, No. 6, pp. 3833–3856 | DOI: 10.19139/soic-2310-5070-2919

Analysis of Queueing-Inventory System that Delivers an Item for Pre-Booked Orders
http://47.88.85.238/index.php/soic/article/view/2947
<p>This paper considers a queueing-inventory system that delivers items for pre-booked orders. The system has a maximum stock capacity of S units, two waiting platforms, and two dedicated servers. Demands arrive according to a Poisson process and enter platform 1 (PL 1) of size M, including one at the service point. Server 1 (SR 1) picks up orders from the client in PL 1, and it is assumed that orders are picked even if the stock level is zero. Following order selection, the client joins platform 2 (PL 2), which has a virtual waiting area of size N. Subsequently, server 2 (SR 2) fabricates the selected orders one by one and distributes them to the PL 2 clients. Both service durations are exponentially distributed. Arriving clients are lost if PL 1 is full. An external supplier replenishes the stock according to the (s, Q) reordering policy, and the lead time for reordering is exponentially distributed. Several numerical results for different parameters are given to clarify the system's key performance indicators. In addition, the numerical interpretations required to improve the suggested model are investigated.</p>Lawrence K, Anbazhagan N, Amutha S, Gyanendra Prasad Joshi, Woong Cho
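The (s, Q) policy itself is easy to sketch in a toy discrete-time simulation: when the stock level falls to the reorder point s, an order of size Q is placed. This example uses a one-period deterministic lead time and a single demand stream, whereas the paper's model has exponential lead times, two platforms, and two servers.

```python
import random

# Toy discrete-time simulation of the (s, Q) replenishment policy
# (hypothetical parameters; not the paper's Markovian model).

def simulate_sQ(periods, s, Q, S, p, seed=0):
    rng = random.Random(seed)
    level, on_order, lost = S, False, 0
    for _ in range(periods):
        if on_order:                     # order placed last period arrives
            level += Q
            on_order = False
        demand = sum(rng.random() < p for _ in range(10))   # Binomial(10, p)
        served = min(demand, level)
        lost += demand - served          # demand lost when stock runs out
        level -= served
        if level <= s and not on_order:
            on_order = True              # trigger a replenishment of size Q
    return level, lost

level, lost = simulate_sQ(periods=200, s=3, Q=7, S=10, p=0.2)
```

Raising s or Q trades holding stock against lost demand; the paper derives the analogous performance indicators analytically from the underlying Markov process rather than by simulation.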
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-09-26 | Vol. 14, No. 6, pp. 3857–3873 | DOI: 10.19139/soic-2310-5070-2947

A Time-Delayed Mathematical Modeling for Monkeypox Transmission with Incubation Period
http://47.88.85.238/index.php/soic/article/view/2875
<p>This paper proposes a time-delayed mathematical model designed to analyze the transmission dynamics of the monkeypox virus, explicitly incorporating the incubation period as a time delay. The disease-free and endemic equilibria of the time-delayed model are analyzed. The basic reproduction number is derived, accounting for the time delay caused by the incubation period. The disease-free equilibrium is locally asymptotically stable when the threshold is less than unity. The model parameters are then estimated using the least-squares fitting method based on monkeypox cases in the United States of America. Numerical simulations are performed with varying time-delay values, representing different lengths of the incubation period. The results reveal that a longer incubation period leads to slower spread of the disease. In other words, the longer the incubation period, the more gradual the increase in the number of infected individuals over time. The observed relationship between incubation period delays and disease spread rate highlights the crucial role of this delay factor in shaping the transmission patterns of monkeypox virus. These insights can inform disease control strategies, particularly those aimed at early detection and isolation during the incubation period.</p>Muhammad Akbar Hidayat, Fatmawati, Cicik Alfiniyah, Olumuyiwa J. Peter
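Numerically, an incubation delay τ enters through a history buffer: new infections at time t draw on the infectious fraction at time t - τ. The toy delayed SI model below (hypothetical parameters, not the paper's full monkeypox system) reproduces the qualitative finding that a longer delay slows the epidemic's rise.

```python
# Toy delayed-transmission model integrated with forward Euler and a
# history buffer for the delay term (illustrative parameters).

def simulate_delayed_si(beta, gamma, tau, dt, T, i0=0.01):
    n = int(T / dt)
    lag = int(tau / dt)
    I = [0.0] * n
    I[0] = i0
    for t in range(n - 1):
        I_lag = I[t - lag] if t >= lag else i0   # constant pre-history
        S = 1.0 - I[t]
        I[t + 1] = I[t] + dt * (beta * S * I_lag - gamma * I[t])
    return I

I_short = simulate_delayed_si(beta=0.4, gamma=0.1, tau=2.0, dt=0.05, T=30)
I_long = simulate_delayed_si(beta=0.4, gamma=0.1, tau=8.0, dt=0.05, T=30)
```

Consistent with the abstract, the trajectory with the longer incubation delay grows more gradually, since each new infection is driven by the smaller infectious fraction from further in the past.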
Copyright (c) 2025 Statistics, Optimization & Information Computing
2025-11-15 | Vol. 14, No. 6, pp. 3874–3890 | DOI: 10.19139/soic-2310-5070-2875