http://47.88.85.238/index.php/soic/issue/feedStatistics, Optimization & Information Computing2025-09-28T03:14:08+08:00David G. Yudavid.iapress@gmail.comOpen Journal Systems<p><em><strong>Statistics, Optimization and Information Computing</strong></em> (SOIC) is an international refereed journal dedicated to the latest advances in statistics, optimization and their applications in information sciences. Topics of interest include (but are not limited to): </p> <p>Statistical theory and applications</p> <ul> <li class="show">Statistical computing, Simulation and Monte Carlo methods, Bootstrap, Resampling methods, Spatial Statistics, Survival Analysis, Nonparametric and semiparametric methods, Asymptotics, Bayesian inference and Bayesian optimization</li> <li class="show">Stochastic processes, Probability, Statistics and applications</li> <li class="show">Statistical methods and modeling in life sciences including biomedical sciences, environmental sciences and agriculture</li> <li class="show">Decision Theory, Time series analysis, High-dimensional multivariate integrals, statistical analysis in markets, business, finance, insurance, economics and social sciences, etc.</li> </ul> <p> Optimization methods and applications</p> <ul> <li class="show">Linear and nonlinear optimization</li> <li class="show">Stochastic optimization, Statistical optimization, Markov chains, etc.</li> <li class="show">Game theory, Network optimization and combinatorial optimization</li> <li class="show">Variational analysis, Convex optimization and nonsmooth optimization</li> <li class="show">Global optimization and semidefinite programming </li> <li class="show">Complementarity problems and variational inequalities</li> <li class="show"><span lang="EN-US">Optimal control: theory and applications</span></li> <li class="show">Operations research, Optimization and applications in management science and engineering</li> </ul> <p>Information computing and machine intelligence</p> <ul> <li class="show">Machine 
learning, Statistical learning, Deep learning</li> <li class="show">Artificial intelligence, Intelligence computation, Intelligent control and optimization</li> <li class="show">Data mining, Data analysis, Cluster computing, Classification</li> <li class="show">Pattern recognition, Computer vision</li> <li class="show">Compressive sensing and sparse reconstruction</li> <li class="show">Signal and image processing, Medical imaging and analysis, Inverse problem and imaging sciences</li> <li class="show">Genetic algorithm, Natural language processing, Expert systems, Robotics, Information retrieval and computing</li> <li class="show">Numerical analysis and algorithms with applications in computer science and engineering</li> </ul>http://47.88.85.238/index.php/soic/article/view/968A New Robust Estimation and Hypothesis Testing for Reinsurance Premiums in Big Data Settings2025-09-28T03:12:48+08:00Touil Salaha.rassoul@ensh.dzRASSOUL Abdelaziza.rassoul@ensh.dzOuld Rouis Hamidhouldrouis@hotmail.comFrihi Redouanerfrihi@yahoo.fr<p>This research study presents a novel methodology to estimate premiums for reinsurance in the setting of large datasets, employing the principle of grouping. We present a median-of-means non-parametric estimator that addresses the difficulties posed by huge datasets. We analyze this estimator's consistency and asymptotic normality under specific conditions on the growth rate of the subgroups.</p> <p>Furthermore, we introduce a novel approach to the empirical likelihood method for the median to evaluate excess-of-loss reinsurance. Our proposed method eliminates the need to estimate the estimator's variance structure in advance, which can be difficult and prone to inaccuracies. A numerical simulation study is conducted to evaluate the efficacy of our proposed estimator. 
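The median-of-means construction at the core of the estimator can be sketched as follows; a minimal illustration with synthetic data (the group count and sample below are hypothetical, not the paper's simulation design):

```python
import random
import statistics

def median_of_means(data, k):
    """Median-of-means: split the sample into k subgroups, average each
    subgroup, and return the median of the subgroup means. Robust to
    outliers that would corrupt the plain sample mean."""
    groups = [data[i::k] for i in range(k)]           # k roughly equal subgroups
    means = [statistics.fmean(g) for g in groups]
    return statistics.median(means)

random.seed(0)
sample = [random.gauss(10, 2) for _ in range(1000)] + [1e6] * 5  # 5 gross outliers
print(round(statistics.fmean(sample), 1))     # plain mean is ruined by the outliers
print(round(median_of_means(sample, 20), 1))  # close to the true mean of 10
```

The outliers can corrupt at most a few subgroup means, so the median over subgroups remains near the uncontaminated mean.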
The results indicate that our estimator is highly resilient in the presence of outliers.</p>2025-09-19T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/1349Properties of the Leimkuhler curve with its application in JCR2025-09-28T03:14:08+08:00V. Asghariasgharivh@yahoo.comG. R. Mohtashami Borzadarangrmohtashami@um.ac.irH. Jabbarijabbarinh@um.ac.ir<p>One of the most noticeable ways of illustrating the degree of concentration in a theoretical or empirical frequency distribution is via the Leimkuhler curve. The Leimkuhler curve is particularly appropriate in the field of informetrics, where the variable of interest is the number of citations, relevant references, borrowings of a monograph, etc. In informetrics, interest usually focuses on the most productive sources, and the equivalent graphical representation is via the Leimkuhler curve. In this paper, we discuss some statistical properties of the Leimkuhler curve, a plot of the cumulative proportion of total productivity against the cumulative proportion of sources, where the sources are ordered non-increasingly with respect to their productivity levels.<br>Also, some aspects of the Leimkuhler curve and its connection with other criteria are derived. Finally, several concentration measures are obtained using the data of the impact factors in eight scientific fields.</p>2025-09-22T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/1648Some Properties of Total Time on Test and Excess Wealth in Bivariate Cases2025-09-26T16:13:32+08:00Mojtaba Esfahanim.esfahani@mail.um.ac.irMohammad Aminim-amini@um.ac.irGholam Reza Mohtashami Borzadarangrmohtashami@um.ac.ir<p>Many of the transformations introduced in the literature have applications in reliability. For example, the total time on test (TTT) and excess wealth (EW) transforms are useful concepts in various fields. 
This paper presents bivariate TTT and EW transforms. We also consider the bivariate location-independent riskier (LIR) transform. In addition, we present conditions for establishing the TTT transform ordering in the bivariate setting and its relationship with the EW order and some stochastic orders. We also establish properties of the bivariate TTT transform order and introduce the new-better-than-used class based on the bivariate TTT transform. Finally, we describe the relationship between the TTT and EW transforms and aging classes in the bivariate setting.</p>2025-08-11T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2241 Analyzing and Classifying Coronary Artery Disease Severity Using Statistical Methods and Machine Learning Techniques2025-09-26T16:13:33+08:00Meriem Bounekdjakhaoula.koukou50@yahoo.comSoumia Kharfouchis_kharfouchi@yahoo.frAbdennour Boulesnaneabdennour.boulesnen@univ-constantine3.dz<p>Background: Metabolic Syndrome (MS) is a cluster of risk factors, including large waist size (LWS), high blood pressure (HBP), high cholesterol levels (HDL), high blood glucose (HBG), glycemic index (GI), and hypertriglyceridemia (HTG), which collectively increase the risk of developing cardiovascular diseases such as Coronary Artery Disease (CAD). Understanding the relationship between MS and CAD severity is crucial for developing targeted prevention and treatment strategies.<br>Methods: This study conducted an etiological and descriptive analysis to characterize the profiles of CAD patients with MS using various statistical methods. These methods included correlation analysis and odds ratio calculations to evaluate the significance of MS components. 
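As a hedged illustration of the odds-ratio calculations mentioned above, a minimal sketch using the standard log-odds-ratio normal approximation (the 2x2 counts are invented, not the study's data):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
                outcome+  outcome-
    exposed        a         b
    unexposed      c         d
    Returns (OR, 95% CI) via the standard log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# hypothetical counts: severe CAD among patients with vs. without one MS component
or_, ci = odds_ratio(30, 20, 15, 35)
print(f"OR = {or_:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

A confidence interval excluding 1 would flag the component as a statistically significant aggravating (OR > 1) or protective (OR < 1) factor.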
Multiple machine learning (ML) models, including Multilayer Perceptron (MLP), Decision Tree (DT), Logistic Regression (LR), AdaBoost (ABT), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), an ensemble Voting Classifier (VC), eXtreme Gradient Boosting (XGBoost), and Light Gradient Boosting Machine (LightGBM), were employed to classify CAD severity and identify modifiable risk factors specific to various MS combinations. The effectiveness of these models was evaluated and compared.<br>Results: The analysis identified HDL, HBG, and LWS as significant aggravating factors for CAD, while HTG appeared to be protective. The XGBoost model demonstrated superior predictive accuracy, achieving an accuracy of 83.12% in predicting CAD severity, compared to other ML models. The inclusion of MS features significantly enhanced the performance of all ML models. <br>Conclusions: The findings underscore the importance of incorporating comprehensive clinical features in predictive models for CAD. The study suggests that targeted prevention strategies and personalized treatment plans should consider the specific MS components influencing CAD severity. 
Future research should focus on validating these findings in larger, diverse populations and further integrating additional clinical and genetic data to refine predictive models.</p>2025-07-28T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2408A comparison of Frequentist and Bayesian network meta analysis with a case study from dental health data2025-09-26T16:13:34+08:00Amritendu Bhattacharyaamri10du@gmail.comBoya Venkatesuvenkatesu.boya@woxsen.edu.inRavilisetty Revathirevathiravilisetty@gmail.com<p>Network meta-analysis is an extension of pairwise meta-analysis in which both direct and indirect treatment effects relative to a chosen reference treatment arm can be obtained from a combined pool of studies, as long as all the treatment arms are connected directly or indirectly via a network based on individual trials or studies. The aim of this network meta-analysis is to summarize and compare the results from Frequentist and Bayesian methods of the direct and indirect clinical evidence on the effectiveness of professionally applied topical fluorides in preventing dental root caries. The two statistical approaches use different frameworks, and each provides unique insights into the network of treatments approved for dental root caries. While the Frequentist approach provides the relative treatment effects along with corresponding confidence intervals compared to the usual care group, the Bayesian approach provides the relative treatment effects along with corresponding credible intervals. The full posterior distribution of treatment effects can be obtained using the Bayesian framework. Both approaches show a similar direction of outcomes with subtle differences. A comparative analysis is presented and discussed using a case study of a few topical fluoride-based treatments for dental caries. 
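To make the Frequentist side concrete, the inverse-variance fixed-effect pooling that underlies such analyses can be sketched as follows (the effect sizes are hypothetical; a real network meta-analysis, e.g. with netmeta, additionally combines direct and indirect comparisons across the whole network):

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooling of study-level effect sizes.
    Each study is weighted by 1/SE^2; returns the pooled effect and its SE."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# hypothetical log risk ratios from three fluoride trials vs. usual care
effects = [-0.40, -0.25, -0.55]
ses = [0.20, 0.15, 0.30]
est, se = fixed_effect_pool(effects, ses)
print(f"pooled log-RR = {est:.3f} (SE {se:.3f})")
```

More precise studies (smaller SE) dominate the pooled estimate, and the pooled SE is always smaller than the smallest individual SE.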
Various aspects of differences in approach, as well as diagnostic checks and treatment ranking methods in both frameworks, are described. The netmeta package in R is used for the Frequentist approach, while the gemtc package is used for the Bayesian analysis.</p>2025-08-15T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2456Utilizing Multi-Arm Bandit and Partitioning Around Medoids for Clustering Food Security Conditions in Sumatra Island2025-09-26T16:13:35+08:00Muhammad Subiantosubianto@usk.ac.idNaila Anastasya Anshoranastasyaansh@gmail.comEvi Ramadhanievi.ramadhani@usk.ac.idNany Salwanany.salwa@usk.ac.id<p><em>This study applies the Multi-Arm Bandit (MAB) and Partitioning Around Medoids (PAM) methods to cluster food security status in Sumatra Island in 2022. Food issues in Indonesia are becoming increasingly complex, with challenges covering food availability, accessibility and food security. Data were obtained from the Food Security and Vulnerability Atlas (FSVA). The MAB method was used to identify the most influential variables in decision-making, while the PAM method was used for clustering based on medoids. The results showed that the PAM method was effective in categorizing provinces of Sumatra Island based on food security variables. Additionally, significant variables identified included poverty, food expenditure, disease rate, and stunting. 
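The medoid-based clustering step can be sketched with a simplified k-medoids iteration in the spirit of PAM (this alternating heuristic is a simplification of the full PAM swap procedure, and the data below are hypothetical, not the FSVA indicators):

```python
import numpy as np

def simple_k_medoids(X, k, iters=20, seed=0):
    """Simplified k-medoids in the spirit of PAM: alternate between
    assigning points to the nearest medoid and re-picking, within each
    cluster, the point that minimizes total in-cluster distance."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if len(members):
                # medoid = member with the smallest summed distance to its cluster
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)

# two well-separated hypothetical "province" feature profiles
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [4.9, 5.1]])
medoids, labels = simple_k_medoids(X, 2)
print(labels)
```

Unlike k-means, the cluster centers are actual data points (medoids), which makes the clusters easier to interpret and more robust to outliers.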
This study provides important contributions to the government and the National Food Agency in designing food security improvement programs in various regions of Sumatra Island and serves as a reference for other researchers applying similar methods in complex data analysis.</em></p>2025-08-05T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2479The Notion of Radical Ideals in MV-Algebras and Product MV-Algebras2025-09-26T16:13:36+08:00Alejandro Diaz Llanoalejodiaz-527@utp.edu.coVictor Hugo Ramírez-Ramírezpechentico@utp.edu.coJose Rodrigo Gonzalezjorodryy@utp.edu.co<p>In this work, we describe an interpretation of the radical of an ideal in the context of lu-groups, semi-low lu-rings and PMVf-algebras. These notions arise naturally from a known notion in the context of MV-algebras and the use of categorical equivalences. The resulting notions in these contexts allow us to propose and prove a version of Hilbert’s Nullstellensatz Theorem.</p>2025-08-12T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2486Artificial Intelligence and Machine Learning Models for Credit Risk Prediction in Morocco2025-09-26T16:13:37+08:00Asmaa Farisasmaa-faris-etu@etu.univh2c.maMostafa Elhachloufielhachloufi@yahoo.fr<p>This study investigates the application of artificial intelligence and machine learning models for credit risk prediction using a real-world dataset collected from a Moroccan credit institution. The data reflect clients' demographic, socio-economic, and financial characteristics, as well as behavioral information related to credit history and interactions with the institution. 
Six supervised learning models (Logistic Regression, Random Forest, Support Vector Machine, Decision Tree, k-Nearest Neighbors, and Naïve Bayes) were trained and evaluated using key performance metrics such as accuracy, recall, F1-score, AUC, and average precision. Results indicate that Random Forest outperformed all other models, demonstrating strong discriminative power and robustness to class imbalance, while Logistic Regression provided consistent and interpretable baseline performance. These findings highlight the effectiveness of ensemble and margin-based methods in credit scoring applications and emphasize the importance of feature importance analysis for transparent and informed decision-making in financial risk assessment.</p>2025-08-15T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2493A Characterization of a Subclass of Separate Ratio-Type Copulas2025-09-26T16:13:38+08:00Ziad Adwanziad.adwan@lc.ac.aeNicola Sottocornolans6159@nyu.edu<p>Copulas are essential tools in statistics and probability theory, enabling the study of the dependence structure between random variables independently of their marginal distributions. Among the various types of copulas, Ratio-Type Copulas have gained significant attention due to their flexibility in modeling joint distributions. This paper focuses on Separate Ratio-Type Copulas, where the dependence function is a separate product of univariate functions. We revisit a theorem characterizing the validity of these copulas under certain assumptions, generalize it to broader settings, and examine the conditions for reversing the theorem in the case of concave generating functions. To address its limitations, we propose new assumptions that ensure the validity of separate copulas under specific conditions. 
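For intuition, one classical copula of ratio form with a separable dependence term is the Ali-Mikhail-Haq copula, C(u,v) = uv / (1 - θ(1-u)(1-v)); the copula axioms such validity theorems guarantee can be checked numerically on a grid (a hedged illustration, not the paper's characterization):

```python
import itertools

def C(u, v, theta):
    """Ali-Mikhail-Haq copula: C(u,v) = uv / (1 - theta*(1-u)*(1-v)),
    a ratio form whose dependence term is the separate product (1-u)*(1-v)."""
    return u * v / (1 - theta * (1 - u) * (1 - v))

def is_valid_on_grid(theta, n=20):
    """Numerically check the copula axioms on an n x n grid: the boundary
    conditions and the 2-increasing (non-negative rectangle volume) property."""
    h = 1.0 / n
    pts = [i * h for i in range(n + 1)]
    try:
        for u in pts:
            if abs(C(u, 1, theta) - u) > 1e-12 or abs(C(u, 0, theta)) > 1e-12:
                return False
        for i, j in itertools.product(range(n), repeat=2):
            u1, u2, v1, v2 = pts[i], pts[i + 1], pts[j], pts[j + 1]
            vol = (C(u2, v2, theta) - C(u2, v1, theta)
                   - C(u1, v2, theta) + C(u1, v1, theta))
            if vol < -1e-12:          # rectangle volume must be non-negative
                return False
    except ZeroDivisionError:         # the ratio degenerates: not a copula
        return False
    return True

print(is_valid_on_grid(0.5))   # True: inside the valid AMH parameter range
print(is_valid_on_grid(5.0))   # False: the denominator vanishes on the grid
```

Such a grid check is only a necessary-condition screen; the theorems discussed in the paper give exact conditions on the generating functions.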
These results refine the theoretical framework for separate copulas, extending their applicability to pure mathematics and applied fields such as finance, risk management, and machine learning.</p>2025-08-12T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2619Survival Modelling of Breast and Brain Cancer Using Statistical Maximum Likelihood and SVM Techniques2025-09-26T16:13:39+08:00Nidhal Saadoonnidhal.q.saadoon@aliraqia.edu.iqHiba Salmanhiba1985ali@gmail.comAdel sufyanadel.sufyan@dpu.edu.krdEmad Az- Zo’bieaaz2009@mutah.edu.joMohammad Tashtoushtashtoushzz@su.edu.om<p>The research focuses on two main objectives: examining the Burr Type XII distribution through MLE parameter estimation, and comparing the MLE and SVM methods. Survival-related functions, such as the survival function and hazard rate, and other derived reliability measures are estimated by executing both methods on real-world data from breast and brain cancer patients. The input layer of the proposed SVM framework contains the distribution parameter specifications, and the outputs are estimates of the reliability function and hazard rate function, as well as the probability density function, reversed hazard rate function, Mills ratio, and odds function. The data show that the hazard function grows after diagnosis and then declines toward the end of the study period, which reflects the theoretical behaviour of Burr Type XII distributions. The survival analysis demonstrates that the theoretical characteristics of the Burr Type XII distribution match the experimental results, validating its use as a model for cancer survival data. 
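The Burr Type XII quantities involved can be written down directly; a minimal sketch using the standard unit-scale parameterization (the shape values below are illustrative, not the fitted estimates from the paper):

```python
def burr12_functions(t, c, k):
    """Standard unit-scale Burr Type XII reliability quantities:
    pdf  f(t) = c*k*t^(c-1) * (1 + t^c)^-(k+1)
    survival S(t) = (1 + t^c)^-k
    hazard  h(t) = f(t)/S(t) = c*k*t^(c-1) / (1 + t^c)."""
    S = (1 + t**c) ** (-k)
    f = c * k * t**(c - 1) * (1 + t**c) ** (-(k + 1))
    return f, S, f / S

# with shape c > 1 the hazard rises after t = 0 and later declines,
# matching the unimodal pattern described for the cancer survival data
c, k = 2.0, 1.5
hazards = [burr12_functions(t, c, k)[2] for t in (0.1, 1.0, 10.0)]
print([round(h, 4) for h in hazards])
```

The unimodal (increase-then-decrease) hazard is exactly the shape the abstract reports: rising shortly after diagnosis and declining toward the end of follow-up.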
The SVM method proves to be an accurate and stable approach for predicting critical survival parameters.</p>2025-06-23T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2648Variable selection in beta regression model using firefly algorithm2025-09-26T16:13:40+08:00Zahraa Mohammed Taherzahraa.mohammed@uoninevah.edu.iq<p>The beta regression model is of widespread scientific interest for modeling proportions and rates data. Building a predictive regression model requires identifying a small set of important variables from the many available. This work introduces the use of a firefly algorithm for variable selection in the beta regression model with varying dispersion parameters. The performance of the proposed method is evaluated through simulations and a real data application. The proposed method demonstrates better performance than the corrected Akaike information criterion, the corrected Schwarz information criterion, and the corrected Hannan and Quinn criterion. The proposed method is effective for variable selection in beta regression models with varying dispersion.</p>2025-07-22T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2656Enhancing Parameter Estimation for Fuzzy Robust Regression in the Presence of Outliers2025-09-26T16:13:40+08:00Vaman M. Salihvamanmuhammed8@gmail.comShelan Ismaeelshelan.ismaeel@uoz.edu.krd<p>This study presents an enhanced algorithm for parameter estimation in fuzzy robust regression (FRR), aimed at improving the reliability of estimates in the presence of outliers. The standard approach of using ordinary least squares (OLS) struggles when dealing with both outlier effects and the uncertainty inherent in data. 
By combining traditional FRR analysis with the Huber loss function, this research addresses these challenges effectively. The performance of the algorithm is evaluated using real-world datasets and a simulation study, demonstrating its ability to minimize the impact of outliers. Furthermore, the algorithm not only outperforms OLS but also serves as a robust alternative to traditional methods, including Huber, Hampel, Tukey, Andrews, MM-estimates and existing FRR approaches.</p>2025-07-30T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2658Enhanced-Efficiency Randomized Response Model: A Simplified Framework2025-09-26T16:13:41+08:00Ahmad M. Aboalkhaira.m.aboalkhair@mans.edu.egEl-Emam El-Hosseinyeaalhabashy@imamu.edu.saMohammad A. Zayedmaazayed@imamu.edu.saTamer Elbayoumitmelbayoumi@ncat.eduMohamed Ibrahimmohamed_ibrahim@du.edu.egA. M. Elshehaweya-elshehawey@du.edu.eg<p>The key challenge in advancing randomized response techniques lies in enhancing efficiency while ensuring simplicity and ease of implementation. This research presents a fresh randomized response framework that achieves comparable effectiveness with fewer randomization devices, simplifying its real-world deployment. Compared to Aboalkhair’s (2025) model, which depends on two randomization tools, the suggested method delivers equivalent efficiency with only a single tool. The study evaluates the new model’s superiority over existing approaches and establishes a measure of privacy protection. 
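For context on what a single randomization device does, the classic Warner randomized response model can be sketched as follows; this is the textbook estimator, not the new framework proposed in the paper (the simulated proportions are hypothetical):

```python
import random

def warner_estimate(responses, p):
    """Warner's randomized response model with one randomization device:
    with probability p the respondent answers the sensitive question,
    with probability 1-p its complement. The unbiased estimator of the
    sensitive proportion pi is (lambda_hat - (1-p)) / (2p - 1)."""
    lam = sum(responses) / len(responses)   # observed proportion of "yes"
    return (lam - (1 - p)) / (2 * p - 1)

random.seed(1)
pi_true, p, n = 0.30, 0.7, 200_000
responses = []
for _ in range(n):
    sensitive = random.random() < pi_true       # respondent's true status
    asks_direct = random.random() < p           # outcome of the device
    responses.append(sensitive if asks_direct else not sensitive)
print(round(warner_estimate(responses, p), 3))  # close to pi_true = 0.30
```

Because no interviewer knows which question was answered, respondents' privacy is protected, at the cost of inflated estimator variance; reducing that efficiency loss with fewer devices is exactly the trade-off the paper targets.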
Through theoretical analysis and numerical comparisons, the results demonstrate a distinct efficiency benefit of the suggested model.</p>2025-07-13T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2837Majority Voting Paraconsistent Annotated Logic Algorithm For Fault Tolerance And Detection In Wireless Sensor Networks 2025-09-26T16:13:42+08:00Abdullah Shawan Alotaibia.shawan@su.edu.sa<p>Sensors in Wireless Sensor Networks (WSNs) are prone to faults. Since sensor nodes often share parts of their information, it becomes possible to detect inconsistencies between a neighbour's report and what is expected. In this work, such inconsistencies are analysed using the Paraconsistent Annotated Logic with Two Values (PAL2V) to compare the actual decisions of neighbouring nodes with the expected decisions reconstructed from the local, partial information each node possesses. An algorithm based on majority voting (MV-PAL2V) is proposed, yielding a set of states that describe the contradiction and guide a specific corrective response to reduce the associated errors. Simulation results demonstrate that using PAL2V enhances the accuracy of majority voting, particularly by reducing false alarm rates and improving the detection of actual events. Moreover, the statistical and interaction analysis of the simulation results emphasised that the number of nodes plays a crucial role in the reliability of the WSNs’ data.</p>2025-08-13T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2481Optimal Competitive Strategies on the Performance of two Insurance Companies2025-09-26T16:13:43+08:00Abouzar Bazyariab_bazyari@yahoo.com<p>In this paper, we present the terminal-wealth optimization problems of two dependent insurance companies, each of which tries to perform better than its competitor. 
It is assumed that both insurers' claim processes are compound Poisson processes and that each insurer is allowed to purchase proportional reinsurance with a constant reinsurance premium and to invest in a financial market consisting of a risk-free asset and a defaultable coupon bond, whose price process is governed by a standard Brownian motion; the dynamics of the defaultable price process are modeled as a mixture of exponential stochastic differential equations of corporate coupon bonds. For the correlated competing insurance companies, by applying Girsanov’s theorem and the compensated Poisson process, we formulate the wealth process of each insurer based on the reinsurance and investment strategies. By solving the nonlinear Hamilton-Jacobi-Bellman equations related to our optimal control problems with exponential utility functions, the optimal investment and reinsurance strategies are derived for both insurers among all admissible policies. Finally, the influence of each model parameter on the optimal portfolio strategies is discussed via numerical experiments. </p>2025-07-26T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2579Hybrid ABC-JAYA Algorithm for Optimizing Resource Allocation in NOMA-Based Downlink Systems2025-09-26T16:13:44+08:00Karima AIT BOUSLAMk.aitbouslam.ced@uca.ac.maJamal Amadidj.amadid1297@edu.uca.maRadouane Iqdouriqdour@gmail.comAbdelouhab ZeroualZeroual@uca.ac.ma<p><span class="fontstyle0">Achieving significant spectral efficiency and enabling massive connectivity are paramount for wireless communication systems in the fifth generation (5G) and beyond. Non-Orthogonal Multiple Access (NOMA) is currently an efficient multiple access method to achieve these objectives. NOMA provides a number of advantages, including enhanced sum rates, improved user fairness, and increased spectral efficiency. 
This is mainly achieved by allowing several users to share common resources simultaneously; as a result, the orthogonality of the conventional orthogonal multiple access method is disrupted. However, resource allocation remains the main issue in NOMA because of the coupling between power allocation and user pairing. In this article, we propose a methodical approach that involves refining the user pairing strategy and power allocation while adhering to constraints on power allocation and enhancing spectral efficiency. Specifically, our approach utilizes the largest user distance for the user grouping strategy. For power allocation, we propose a hybrid algorithm that combines the JAYA and Artificial Bee Colony (ABC) algorithms. Simulation results indicate that our suggested approach outperforms conventional approaches, such as fractional and fixed power allocation, by enhancing spectral efficiency by at least </span><span class="fontstyle2">50</span><span class="fontstyle3"> bits/s/Hz </span><span class="fontstyle0">and improving the bit-error rate performance. Furthermore, the research explores the impact of different modulation schemes on the proposed strategy.</span></p>2025-08-06T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2643On RIDS Analysis for Shade Tree Placement and Its Application to STGNN Multi-step Forecasting on RH and CO2 Concentration of Coffee Agroforestry2025-09-26T16:13:45+08:00Zainur Rasyid Ridlozainur.fkip@unej.ac.idDafikd.dafik@unej.ac.idJoko Waluyojokowaluyo.fkip@unej.ac.idYushardiyus_agk.fkip@unej.ac.idM. Venkatachalamvenkatmaths@gmail.com<p>Let G(V,E) be a finite, simple, and connected graph, where |V| and |E| denote the number of vertices and edges, respectively. A subset D ⊆ V is called a dominating set if every vertex in V \ D is adjacent to at least one vertex in D. 
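The dominating-set condition just defined is easy to verify programmatically; a minimal sketch on a hypothetical 6-cycle:

```python
def is_dominating(adj, D):
    """Check the definition above: every vertex outside D must have
    at least one neighbour in D (adj maps vertex -> set of neighbours)."""
    return all(adj[v] & D for v in adj if v not in D)

# hypothetical 6-cycle 0-1-2-3-4-5-0
adj = {v: {(v - 1) % 6, (v + 1) % 6} for v in range(6)}
print(is_dominating(adj, {0, 3}))   # True: every other vertex touches 0 or 3
print(is_dominating(adj, {0}))      # False: vertices 2, 3, 4 are uncovered
```

On the 6-cycle, {0, 3} is in fact also independent (0 and 3 are not adjacent), so it is an independent dominating set of minimum size.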
If no two vertices in D are adjacent, then D is referred to as an independent set. The independent domination number of G, denoted by γi(G), is the minimum size of an independent dominating set. For a given vertex v ∈ V, its metric representation with respect to an ordered set W = {w1,w2, . . . ,wk} is defined as the k-vector r(v|W) = (d(v,w1), d(v,w2), d(v,w3), . . . , d(v,wk)), where d(v,w) is the shortest path distance between vertices v and w. A set W is called a resolving independent dominating set (RIDS) if it is an independent dominating set and every pair of distinct vertices in G has a unique metric representation relative to W. The smallest cardinality of such a set is known as the resolving<br>independent domination number, denoted by γri(G). In this paper, we obtain lower and upper bounds for γri(G) and determine the exact value of the resolving independent domination number of some graph classes. Furthermore, to demonstrate a robust application of resolving independent domination, at the end of this paper we illustrate its implementation in analyzing a Spatial Temporal Graph Neural Network (STGNN) model for multi-step forecasting of relative humidity (RH) and carbon dioxide (CO2) concentration in coffee agroforestry.</p>2025-09-18T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2653Feature selection using binary Harris Hawks optimization algorithm to improve K-Means clustering2025-09-26T16:13:46+08:00Shaymaa haleem ibrahemshayma.haleem@uomosul.edu.iqAmmar Saad Abduljabbarammarsaad86@uomosul.edu.iqOmar Saber Qasimomar.saber@uomosul.edu.iq<p>This study aims to explore the suitability of adopting k-means clustering for the categorization of five disaggregated data sets that undergo feature selection employing a binary Harris Hawks optimization algorithm (BHHOA). 
First, the BHHOA is used to select the most important features from each dataset, reducing data dimensionality and improving the quality of the data collected. Then, the k-means clustering algorithm is applied to the reduced data sets to form a sensible number of clusters. The clustering was evaluated in terms of its accuracy and the selected feature set. Comparing the results with those obtained using features selected by other methods shows that BHHOA-based feature selection enhances clustering, confirming its capability to handle large datasets with high dimensionality. The outcomes show that the proposed approach, consisting of BHHOA for feature selection followed by k-means clustering, can significantly improve the classification performance on the datasets.</p>2025-08-01T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2269In-Depth Exploration of Industry-Level Deep Learning Model for Brain Anomaly Detection2025-09-26T16:13:47+08:00Md. Eyamin Mollaeyamin.bubt@gmail.comMd. Anwar Hussen Wadudmahwadud@sstu.ac.bdMd Rakibur Rahman Zihadrr4kib@gmail.comT.M. Amir Ul Haque Bhuiyanamir@bubt.edu.bdMd. Mahbub-Or-Rashidmahbub@bubt.edu.bdAli Azgarazgar@bubt.edu.bdMd. Saddam Hossainmhossain@bubt.edu.bdJahirul Islam Babarjibabar@bubt.edu.bd<p>Finding abnormalities in the brain is essential to identifying neurological conditions and developing patient-specific treatment plans. An in-depth study of the performance and deployment of a cutting-edge deep learning model for brain anomaly detection in practical applications is the aim of the project In-Depth Exploration of Industry-Level Deep Learning Model for Brain Anomaly Detection. 
The paper addresses the challenges and barriers of developing an industry-level model while taking technological, ethical, and other aspects into account. One objective of the project is to assess how well deep learning models perform on an expansive dataset of brain images from different neuroimaging modalities, through careful testing and validation across a range of patient demographics and clinical scenarios, in order to evaluate the models' sensitivity, accuracy, and generalizability. The study also aims to provide ethical benchmarks for protecting patient privacy and increasing trust in medical procedures. This investigation is significant because it has the potential to substantially change the way diagnostics for brain anomalies are performed; the endeavor aims to advance healthcare by utilizing an industrial-scale deep learning model to facilitate early diagnosis and tailored treatment for individuals with neurological disorders. A further aim of the project is to clarify the technological challenges of implementing an industrial-scale deep learning model for the diagnosis of brain anomalies in real healthcare settings. The study addresses issues of data preparation, model training, and validation required for integration into current healthcare systems.</p>2025-08-01T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2335Numerical Solutions of Multi-Dimensional Fractional Telegraph Equations2025-09-26T16:13:49+08:00NASSER Rhaif SwainNaserrhaif809@utq.edu.iqHassan Kamil jassimhassankamil@utq.edu.iq<p>This study employs the Young Variational Iteration Method (YVIM) to analyze the analytical solutions of the spatio-temporal Telegraph equation (ST-TE). 
*YVIM* is an innovative hybrid integral-transform strategy that combines *VIM* with the *Young* transform. It effectively and rapidly generates convergent series-type solutions through an iterative process that requires fewer computations. The method's validity is demonstrated by applying it to two ST-TE test cases within the framework of *Tanya's derivative*, which is defined with a non-singular kernel function. Numerous comparisons between the approximate solutions, exact solutions, and results available in the relevant literature verify the accuracy and effectiveness of the technique. Graphical representations illustrate the impact of the fractional, temporal, and spatial parameters on the behavior of the obtained solutions. The results suggest that the method is straightforward to implement and can be used to explore complex physical systems governed by nonlinear partial differential equations with fractional time components.</p>2025-04-15T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2448Study on microscopic numerical simulations of an SIQR model considering networks2025-09-26T16:13:49+08:00Thabang Mapharingmaphathabang@gmail.comMokaedi Lekgarilekgarim@biust.ac.bw<p>In this paper, we describe the SIQR epidemic model with differential equations and then propose four generalized systems of differential equations to model the spread of epidemic diseases on networks. We give a simple solution of the SIQR model, derive the basic reproduction number, and validate the theoretical results using numerical simulations.
Finally, we run microscopic simulations of the epidemic model's differential equations, analysing how the infection spreads differently on a scale-free network and on a random network.</p>2025-08-02T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2484Adaptive clustering using enhanced DBSCAN: a dynamic approach to optimizing density-based clustering2025-09-26T16:13:50+08:00Mayas Aljibawimayas.mohammed@uomus.edu.iqHayder Kareem Algabrihayder_kareem_algabri@hilla-unc.edu.iq Zaid Ibrahim Rasoolzaid.ibrahim.rasool@uobabylon.edu.iq<p class="Abstract" style="margin-right: 28.1pt; text-align: justify;"><span lang="EN-GB">Clustering is a critical unsupervised learning technique for identifying patterns and structures in data. Traditional algorithms, such as density-based spatial clustering of applications with noise (DBSCAN), struggle with datasets characterized by varying densities, overlapping features, and noise, leading to suboptimal clustering quality. To address these limitations, this study introduces an Enhanced Adaptive DBSCAN (ADBSCAN) algorithm that dynamically adjusts the epsilon parameter and leverages silhouette score validation to improve cluster quality. The algorithm was tested on three benchmark datasets representing varying complexities. Findings showed that Enhanced ADBSCAN could find meaningful clusters, especially in datasets with modest feature overlap; however, datasets with substantial overlap and high-density variations remained difficult. The findings demonstrate how important parameter selection is and how adaptive techniques that dynamically modify these parameters in response to data properties can greatly improve clustering performance on a variety of data sets.
Future studies should concentrate on refining the adaptation mechanisms to better manage overlapping features and varying data densities, improving the algorithm's resilience and practicality.</span></p>2025-07-22T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2489Numerical methods for evolutionary problems in partial differential equations and control2025-09-26T16:13:51+08:00Guillermo Villa Martínez gvilla@utp.edu.coCarlos Alberto Ramírez Vanegascaramirez@utp.edu.coOscar Danilo Montoya Giraldoodmontoyag@udistrital.edu.co<p>In this paper we implement the finite element method for parabolic problems with dominant transport terms. The linear system of equations arising from the partial differential equation takes its form from the weak formulation of the problem. Several numerical experiments are performed to study the convergence of the solution, and an error analysis in a Sobolev space for linear functions is carried out.</p>2025-06-23T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2501Bifurcation analysis of dengue hemorrhagic fever model with logistic growth rate in aquatic stage2025-09-26T16:13:52+08:00Herri Sulaimanherrimsc@gmail.comFatmawati Fatmawatifatmawati@fst.unair.ac.idCicik Alfiniyahcicik-a@fst.unair.ac.idKumama Regassa Chenekekumamaregassa@gmail.com<p>This work analyzes the transmission dynamics of dengue hemorrhagic fever (DHF) by incorporating the aquatic phase and a logistic growth rate for the mosquito population. The model accounts for human-mosquito interactions and explores the role of disease-induced mortality. We investigate the existence and stability of equilibria, particularly focusing on the phenomenon of backward bifurcation.
Our analysis demonstrates that the disease-free equilibrium is globally asymptotically stable when the basic reproduction number is less than unity in the absence of disease-induced mortality. However, when disease-induced mortality is considered, backward bifurcation emerges, leading to the coexistence of multiple equilibria when the basic reproduction number lies between the critical reproduction number and one. A Lyapunov function approach confirms the global stability of the endemic equilibrium when the basic reproduction number exceeds unity. Furthermore, we show that neglecting disease-induced mortality eliminates backward bifurcation, ensuring a unique endemic equilibrium. Numerical simulations support our theoretical findings, illustrating different stability behaviors under varying initial conditions.</p>2025-07-17T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2507Optimizing Lemongrass Disease Detection: A Comparative Analysis of Neural Architecture Search (NAS) with convolutional neural network (CNN) and Transfer Learning models 2025-09-26T16:13:53+08:00Putra Sumariputras@usm.myAhmed Abed Mohammeda.alsherbe@qu.edu.iqMustafa M. Abd Zaidmustafamajeed2014@gmail.comShuchuan Tiantianshuchuan@student.usm.myWenjing Wuwuwenjing@student.usm.myTingting Zhangzhangtingting@student.usm.myXiaolin Fufuxiaolin@student.usm.myHaolu Dongdonghaolu@student.usm.myYangyang Weiweiyangyang@student.usm.my<p style="font-weight: 400;">Lemongrass is an economically important crop that faces significant disease challenges, making timely and accurate detection crucial for enhancing crop yield and quality. Traditional detection methods rely on manual observation, which can be subjective and inefficient. This study uses convolutional neural network (CNN) models to investigate automated detection methods for lemongrass diseases.
We mainly utilize Neural Architecture Search (NAS) methods, namely Efficient Neural Architecture Search (ENAS) and Partial Channel-Differentiable Architecture Search (PC-DARTS), to optimize the model architectures. We then compare the performance of these NAS-optimized models with transfer learning models such as VGG16, AlexNet, and Inception. Data augmentation and preprocessing methods were applied to improve model performance. PC-DARTS achieved 92.19% accuracy on the validation set with a significant reduction in computational resources, while ENAS reached 83.33%. The paper demonstrates the potential of NAS for real-time lemongrass disease detection in comparison with traditional approaches.</p>2025-07-29T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2564Nonstandard Finite Difference Schemes for Solving Systems of two Linear Fractional Differential Equations2025-09-26T16:13:54+08:00Samah Alisamahiqr@gmail.comEihab Bashierebashier@su.edu.om<p>This paper provides non-standard finite difference methods for solving a Caputo-type fractional linear system of two equations with real eigenvalues. The eigenvalues are classified into two types, distinct and repeated, and the repeated case is further divided into two categories according to whether the dimension of the corresponding eigenspace is one or two. For each of the three scenarios, we obtain the exact solution and develop the numerator and denominator functions for the nonstandard finite difference scheme. The convergence of each of the three proposed schemes is established by proving consistency and stability, and we show that each technique is unconditionally stable when the system's eigenvalues are negative.
Three examples demonstrate the performance of the proposed methods.</p>2025-07-30T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2566Properties on Micro Pre-Covering Map in Micro Topological Spaces2025-09-26T16:13:55+08:00Stanley Roshan. Sstanleyroshan20@gmail.comP. Sathishmohanstanleyroshan20@gmail.com K. Rajalakshmistanleyroshan20@gmail.comS. Mythilistanleyroshan20@gmail.com<p>The basic objective of this research work is to introduce and investigate the properties of micro semi-continuous maps, micro pre-continuous maps, micro pre-irresolute maps, and micro pre-covering maps.</p>2025-07-17T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2633Investigation of super lehmer – 3 mean labeling in theta related graphs2025-09-26T16:13:55+08:00D. Praveen Prabhuprof.praveenprabhu@gmail.com O. V. Shanmuga Sundaramprof.praveenprabhu@gmail.com<p>In graph theory, labeling of vertices and edges according to specific rules offers insight into the structural properties of graphs and their potential applications. This study focuses on a particular type of graph labeling known as Super Lehmer-3 Mean Labeling. A graph is said to admit such a labeling if there exists an injective vertex labeling such that the corresponding edge labels, derived from the Lehmer-3 mean of their incident vertex labels, are also distinct and satisfy certain arithmetic conditions. Graphs that admit such a labeling are termed Super Lehmer-3 Mean Graphs.<br>In this work, we investigate the existence of super Lehmer-3 mean labeling in various families of theta-related graphs. Specifically, we analyze standard theta graphs and their structural variations obtained by subdivisions and attachments of pendant vertices. The results confirm that these modified forms of theta graphs also admit super Lehmer-3 mean labeling.
Additionally, we extend our exploration to harmonic mean labeling in selected theta graphs, highlighting the conditions under which this alternative labeling scheme is valid.<br>The findings presented in this paper contribute to the growing body of knowledge in graph labeling theory, offering new characterizations and constructions of labeled graph families. These results may have implications for theoretical studies and applications where such labeling structures are relevant.</p>2025-08-25T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2811A Hybrid Fusion Approach for Skin Cancer Detection using Deep Learning on Clinical Images and Machine Learning on Patient Metadata2025-09-26T16:13:56+08:00Aya Saber Abd El Aziz Omran as2855@fayoum.edu.egMohamed Mohamed El-Gazzarm_elgazzar@cs.mti.edu.egMary Monir Saeid mmh04@fayoum.edu.eg<p>Skin cancer continues to be a global health issue, with early detection being critical for improving treatment outcomes. While deep learning models like convolutional neural networks (CNNs) have proven highly efficient in skin cancer classification from dermatological images, they often disregard the valuable patient metadata that can contribute to better diagnostic accuracy. In the present study, we introduce a multimodal late fusion framework that integrates both skin cancer images and patient metadata. The approach leverages the Inception-ResNet-v2 (IRv2) model to extract image features, and a stacking ensemble consisting of Extra Trees and Random Forest classifiers to model the patient metadata. A final voting classifier with a soft voting strategy, aggregating class probabilities from Logistic Regression and Random Forest base voters, then performs the late fusion. This yields an accuracy of 95.9% on the HAM10000 dataset.
Our results highlight the potential of multimodal approaches in healthcare applications.</p>2025-08-02T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computinghttp://47.88.85.238/index.php/soic/article/view/2591Application of Ujlayan-Dixit (UD) Fractional Gamma With Two-Parameters Probability Distribution2025-09-26T16:13:57+08:00Iqbal H. Jebrili.jebril@zuj.edu.joIqbal Batihaiqbalbatiha22@yahoo.comNadia Allouchnadia.allouch.etu@univ-mosta.dz<p>The main goal of this research is to use the Ujlayan-Dixit (UD) fractional derivative to generate a new fractional probability density function for the two-parameter gamma distribution, and to develop applications of this new distribution such as the cumulative distribution, survival, and hazard functions. Additionally, we establish further concepts and applications for continuous random variables by deriving the UD fractional analogues of statistical measures, including the expectation, rth moments, rth central moments, variance, and standard deviation. Finally, UD fractional entropy measures, such as the Shannon, Tsallis, and Rényi entropies, are explained.</p>2025-09-09T00:00:00+08:00Copyright (c) 2025 Statistics, Optimization & Information Computing
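<p>As a concrete point of reference for the final abstract above, the classical (non-fractional) two-parameter gamma density, survival, and hazard functions can be sketched in a few lines of Python. This is only the classical baseline that the UD fractional construction generalizes, not the UD distribution itself; the shape and scale values in the usage comment are illustrative, and the closed-form survival function assumes an integer shape parameter:</p>

```python
import math

def gamma_pdf(x, shape, scale):
    """Classical two-parameter gamma density f(x; k, theta)."""
    return x ** (shape - 1) * math.exp(-x / scale) / (math.gamma(shape) * scale ** shape)

def gamma_sf(x, shape, scale):
    """Survival function P(X > x) for integer shape k (Erlang closed form)."""
    z = x / scale
    return math.exp(-z) * sum(z ** k / math.factorial(k) for k in range(shape))

def gamma_hazard(x, shape, scale):
    """Hazard rate h(x) = f(x) / S(x)."""
    return gamma_pdf(x, shape, scale) / gamma_sf(x, shape, scale)

# Illustrative values: shape k = 2, scale theta = 1.5, evaluated at x = 3.
# For k = 2 the hazard simplifies to x / (theta * (theta + x)) = 3/6.75 ≈ 0.444.
print(gamma_hazard(3.0, 2, 1.5))
```

<p>The Erlang-sum survival function avoids external dependencies; for non-integer shapes one would instead use the regularized upper incomplete gamma function (e.g., <code>scipy.stats.gamma.sf</code>).</p>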