-
Binary Peacock Algorithm: A Novel Metaheuristic Approach for Feature Selection J. Classif. (IF 2.0) Pub Date : 2024-03-04 Hema Banati, Richa Sharma, Asha Yadav
-
Inferential Tools for Assessing Dependence Across Response Categories in Multinomial Models with Discrete Random Effects J. Classif. (IF 2.0) Pub Date : 2024-03-04
We propose a discrete random effects multinomial regression model to address estimation and inference for categorical, hierarchical data. Random effects are assumed to follow a discrete distribution with an a priori unknown number of support points. For a K-category response, the model identifies a latent structure at the highest level of grouping, where groups …
-
Prediction of Forest Fire Risk for Artillery Military Training using Weighted Support Vector Machine for Imbalanced Data J. Classif. (IF 2.0) Pub Date : 2024-03-04 Ji Hyun Nam, Jongmin Mun, Seongil Jo, Jaeoh Kim
-
Supervised Classification of High-Dimensional Correlated Data: Application to Genomic Data J. Classif. (IF 2.0) Pub Date : 2024-02-28 Aboubacry Gaye, Abdou Ka Diongue, Seydou Nourou Sylla, Maryam Diarra, Amadou Diallo, Cheikh Talla, Cheikh Loucoubar
-
Soft Label Guided Unsupervised Discriminative Sparse Subspace Feature Selection J. Classif. (IF 2.0) Pub Date : 2024-01-25 Keding Chen, Yong Peng, Feiping Nie, Wanzeng Kong
-
Variable Selection for Hidden Markov Models with Continuous Variables and Missing Data J. Classif. (IF 2.0) Pub Date : 2024-01-23 Fulvia Pennoni, Francesco Bartolucci, Silvia Pandolfi
-
Nonparametric Cognitive Diagnosis When Attributes Are Polytomous J. Classif. (IF 2.0) Pub Date : 2024-01-11 Youn Seon Lim
Cognitive diagnosis models provide diagnostic information on whether examinees have mastered the skills, called “attributes,” that characterize a given knowledge domain. Based on attribute mastery, distinct proficiency classes are defined to which examinees are assigned based on their item responses. Attributes are typically perceived as binary. However, polytomous attributes may yield higher precision …
-
Parsimonious Seemingly Unrelated Contaminated Normal Cluster-Weighted Models J. Classif. (IF 2.0) Pub Date : 2024-01-08
Normal cluster-weighted models constitute a modern approach to linear regression that simultaneously performs model-based cluster analysis and multivariate linear regression with random quantitative regressors. Robustified models based on the contaminated normal distribution, which can accommodate mildly atypical observations, have recently been developed. A …
-
Unsupervised Classification with a Family of Parsimonious Contaminated Shifted Asymmetric Laplace Mixtures J. Classif. (IF 2.0) Pub Date : 2024-01-06
A family of parsimonious contaminated shifted asymmetric Laplace mixtures is developed for unsupervised classification of asymmetric clusters in the presence of outliers and noise. A series of constraints is applied to a modified factor analyzer structure of the component scale matrices, yielding a family of twelve models. Application of the modified factor analyzer structure and these parsimonious …
-
funLOCI: A Local Clustering Algorithm for Functional Data J. Classif. (IF 2.0) Pub Date : 2023-12-07 Jacopo Di Iorio, Simone Vantini
-
Clustered Sparse Structural Equation Modeling for Heterogeneous Data J. Classif. (IF 2.0) Pub Date : 2023-11-30 Ippei Takasawa, Kensuke Tanioka, Hiroshi Yadohisa
-
Classification Under Partial Reject Options J. Classif. (IF 2.0) Pub Date : 2023-11-25 Måns Karlsson, Ola Hössjer
-
Model-Based Clustering with Nested Gaussian Clusters J. Classif. (IF 2.0) Pub Date : 2023-11-13 Jason Hou-Liu, Ryan P. Browne
-
Logistic Normal Multinomial Factor Analyzers for Clustering Microbiome Data J. Classif. (IF 2.0) Pub Date : 2023-11-07 Wangshu Tu, Sanjeena Subedi
-
Missing Values and Directional Outlier Detection in Model-Based Clustering J. Classif. (IF 2.0) Pub Date : 2023-10-31 Hung Tong, Cristina Tortora
-
Multiclass Sparse Discriminant Analysis Incorporating Graphical Structure Among Predictors J. Classif. (IF 2.0) Pub Date : 2023-10-14 Jingxuan Luo, Xuejiao Li, Chongxiu Yu, Gaorong Li
-
Optimal Band Selection Using Evolutionary Machine Learning to Improve the Accuracy of Hyper-spectral Images Classification: a Novel Migration-Based Particle Swarm Optimization J. Classif. (IF 2.0) Pub Date : 2023-09-16 Milad Vahidi, Sina Aghakhani, Diego Martín, Hossein Aminzadeh, Mehrdad Kaveh
-
On Model-Based Clustering of Directional Data with Heavy Tails J. Classif. (IF 2.0) Pub Date : 2023-09-12 Yingying Zhang, Volodymyr Melnykov, Igor Melnykov
-
Expanding the Class of Global Objective Functions for Dissimilarity-Based Hierarchical Clustering J. Classif. (IF 2.0) Pub Date : 2023-09-04 Sebastien Roch
-
Two Simple but Efficient Algorithms to Recognize Robinson Dissimilarities J. Classif. (IF 2.0) Pub Date : 2023-08-18 M. Carmona, V. Chepoi, G. Naves, P. Préa
-
A Robust Contextual Fuzzy C-Means Clustering Algorithm for Noisy Image Segmentation J. Classif. (IF 2.0) Pub Date : 2023-08-09 Karim Kalti, Asma Touil
-
Do Prior Information on Performance of Individual Classifiers for Fusion of Probabilistic Classifier Outputs Matter? J. Classif. (IF 2.0) Pub Date : 2023-07-22 Jordan Felicien Masakuna, Pierre Katalay Kafunda
-
A Survey on Model-Based Co-Clustering: High Dimension and Estimation Challenges J. Classif. (IF 2.0) Pub Date : 2023-07-17 C. Biernacki, J. Jacques, C. Keribin
-
CPclus: Candecomp/Parafac Clustering Model for Three-Way Data J. Classif. (IF 2.0) Pub Date : 2023-06-15 Donatella Vicari, Paolo Giordani
-
Dynamic Kernel Clustering by Spider Monkey Optimization Algorithm J. Classif. (IF 2.0) Pub Date : 2023-06-18 Vaishali P. Patel, L. K. Vishwamitra
-
Matrix-Variate Hidden Markov Regression Models: Fixed and Random Covariates J. Classif. (IF 2.0) Pub Date : 2023-06-12 Salvatore D. Tomarchio, Antonio Punzo, Antonello Maruotti
-
Zero-Inflated Time Series Clustering Via Ensemble Thick-Pen Transform J. Classif. (IF 2.0) Pub Date : 2023-06-12 Minji Kim, Hee-Seok Oh, Yaeji Lim
-
E-ReMI: Extended Maximal Interaction Two-mode Clustering J. Classif. (IF 2.0) Pub Date : 2023-05-10 Zaheer Ahmed, Alberto Cassese, Gerard van Breukelen, Jan Schepers
-
Computing Finite Mixture Estimators in the Tails J. Classif. (IF 2.0) Pub Date : 2023-04-13 Marilena Furno
-
Local and Overall Deviance R-Squared Measures for Mixtures of Generalized Linear Models J. Classif. (IF 2.0) Pub Date : 2023-04-04 Roberto Di Mari, Salvatore Ingrassia, Antonio Punzo
-
Characteristics of Distance Matrices Based on Euclidean, Manhattan and Hausdorff Coefficients J. Classif. (IF 2.0) Pub Date : 2023-04-03 J. T. Temple
-
Finding the Proverbial Needle: Improving Minority Class Identification Under Extreme Class Imbalance J. Classif. (IF 2.0) Pub Date : 2023-02-23 Trent Geisler, Herman Ray, Ying Xie
-
Classification Trees with Mismeasured Responses J. Classif. (IF 2.0) Pub Date : 2023-02-16 Liqun Diao, Grace Y. Yi
-
Model-Based Clustering and Classification Using Mixtures of Multivariate Skewed Power Exponential Distributions J. Classif. (IF 2.0) Pub Date : 2023-02-15 Utkarsh J. Dang, Michael P.B. Gallaugher, Ryan P. Browne, Paul D. McNicholas
-
DDCAL: Evenly Distributing Data into Low Variance Clusters Based on Iterative Feature Scaling J. Classif. (IF 2.0) Pub Date : 2023-01-25 Marian Lux, Stefanie Rinderle-Ma
-
Uncertainty Diagnostics of Binomial Regression Trees for Ordered Rating Data J. Classif. (IF 2.0) Pub Date : 2023-01-21 Rosaria Simone
-
A Semi-parametric Density Estimation with Application in Clustering J. Classif. (IF 2.0) Pub Date : 2022-12-14 Mahdi Salehi, Andriette Bekker, Mohammad Arashi
-
Merging Components in Linear Gaussian Cluster-Weighted Models J. Classif. (IF 2.0) Pub Date : 2022-12-07 Sangkon Oh, Byungtae Seo
-
Imputation Strategies for Clustering Mixed-Type Data with Missing Values J. Classif. (IF 2.0) Pub Date : 2022-11-26 Rabea Aschenbruck, Gero Szepannek, Adalbert F. X. Wilhelm
-
Group-Wise Shrinkage Estimation in Penalized Model-Based Clustering J. Classif. (IF 2.0) Pub Date : 2022-10-11 Alessandro Casa, Andrea Cappozzo, Michael Fop
-
A Kemeny Distance-Based Robust Fuzzy Clustering for Preference Data J. Classif. (IF 2.0) Pub Date : 2022-10-07 Pierpaolo D’Urso, Vincenzina Vitale
-
Multinomial Principal Component Logistic Regression on Shape Data J. Classif. (IF 2.0) Pub Date : 2022-10-01 Meisam Moghimbeygi, Anahita Nodehi
-
Hierarchical Means Clustering J. Classif. (IF 2.0) Pub Date : 2022-09-23 Maurizio Vichi, Carlo Cavicchia, Patrick J. F. Groenen
-
Infinite Mixtures of Multivariate Normal-Inverse Gaussian Distributions for Clustering of Skewed Data J. Classif. (IF 2.0) Pub Date : 2022-08-23 Yuan Fang, Dimitris Karlis, Sanjeena Subedi
-
Understanding the Adjusted Rand Index and Other Partition Comparison Indices Based on Counting Object Pairs J. Classif. (IF 2.0) Pub Date : 2022-07-22 Matthijs J. Warrens, Hanneke van der Hoef
In unsupervised machine learning, agreement between partitions is commonly assessed with so-called external validity indices. Researchers tend to use and report indices that quantify agreement between two partitions for all clusters simultaneously. Commonly used examples are the Rand index and the adjusted Rand index. Since these overall measures give a general notion of what is going on, their values …
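The pair-counting construction behind the adjusted Rand index can be sketched in a few lines. The following is an illustrative implementation (the function name is ours, not from the article): it tabulates the contingency table of the two partitions, counts agreeing object pairs, and applies the standard permutation-model chance adjustment.

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Pair-counting adjusted Rand index between two partitions of the same objects."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))  # joint cluster-assignment counts
    rows = Counter(labels_a)                        # cluster sizes in partition A
    cols = Counter(labels_b)                        # cluster sizes in partition B
    index = sum(comb(c, 2) for c in contingency.values())
    sum_rows = sum(comb(c, 2) for c in rows.values())
    sum_cols = sum(comb(c, 2) for c in cols.values())
    expected = sum_rows * sum_cols / comb(n, 2)     # agreement expected by chance
    max_index = (sum_rows + sum_cols) / 2
    return (index - expected) / (max_index - expected)
```

Because only pair counts matter, the index is invariant to relabeling: `adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])` returns 1.0.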
-
Finite Mixture of Censored Linear Mixed Models for Irregularly Observed Longitudinal Data J. Classif. (IF 2.0) Pub Date : 2022-07-08 Francisco H. C. de Alencar, Larissa A Matos, Víctor H. Lachos
-
Community Detection in Feature-Rich Networks Using Data Recovery Approach J. Classif. (IF 2.0) Pub Date : 2022-07-06 Boris Mirkin, Soroosh Shalileh
-
Network-Based Discriminant Analysis for Multiclassification J. Classif. (IF 2.0) Pub Date : 2022-06-02 Li-Pang Chen
-
Complex Principal Component Analysis: Theory and Geometrical Aspects J. Classif. (IF 2.0) Pub Date : 2022-05-06 Jean-Jacques Denimal, Sergio Camiz
-
Batch Self-Organizing Maps for Distributional Data with an Automatic Weighting of Variables and Components J. Classif. (IF 2.0) Pub Date : 2022-03-18 Francisco de A. T. de Carvalho, Antonio Irpino, Rosanna Verde, Antonio Balzanella
This paper deals with a batch self-organizing map algorithm for data described by distributional-valued variables (DBSOM). Such variables take probability or frequency distributions on a numeric support as values. Given the nature of the data, the loss function is based on the L2 Wasserstein distance, one of the most widely used metrics for comparing distributions in the context …
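For one-dimensional distributions, the L2 Wasserstein distance mentioned above has a convenient closed form: it equals the L2 distance between the two quantile functions. A minimal numpy sketch for empirical samples (function name and grid size are our assumptions, not code from the DBSOM paper):

```python
import numpy as np

def l2_wasserstein(x, y, grid=100):
    """L2 Wasserstein distance between two 1-d empirical distributions,
    computed as the L2 distance between their quantile functions."""
    q = (np.arange(grid) + 0.5) / grid          # evaluation points in (0, 1)
    qx = np.quantile(np.asarray(x, float), q)   # empirical quantile function of x
    qy = np.quantile(np.asarray(y, float), q)
    return float(np.sqrt(np.mean((qx - qy) ** 2)))
```

A sanity check of the closed form: shifting a sample by a constant c moves every quantile by c, so the distance between a sample and its shifted copy is exactly c.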
-
Editorial: Journal of Classification Vol. 39-1 J. Classif. (IF 2.0) Pub Date : 2022-03-01 Paul D. McNicholas
-
On Assessments of Agreement Between Fuzzy Partitions J. Classif. (IF 2.0) Pub Date : 2022-02-28 Jeffrey L. Andrews, Ryan Browne, Chelsey D. Hvingelby
We extend the literature regarding assessments of agreement between soft/fuzzy/probabilistic cluster allocations by providing closed-form approaches for two measures which behave as fuzzy generalizations of the popular adjusted Rand index (ARI): one novel and one previously requiring a Monte Carlo estimation process. Both of these measures retain the reflexive property of the ARI—an arguably essential …
-
Supervised Classification for Link Prediction in Facebook Ego Networks With Anonymized Profile Information J. Classif. (IF 2.0) Pub Date : 2022-02-03 Riccardo Giubilei, Pierpaolo Brutti
Social networks are very dynamic objects where nodes and links are continuously added or removed. Hence, an important but challenging task is link prediction, that is, to predict the likelihood of a future association between any two nodes. We use a classification approach to perform link prediction on data retrieved from Facebook in the typical form of ego networks. In addition to the more traditional …
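Classification-based link prediction of this kind typically turns node pairs into topological features. As a hedged illustration (not the authors' feature set), a classic such feature, the Jaccard coefficient of two nodes' neighbourhoods, can be computed for every unlinked pair from an edge list:

```python
from itertools import combinations

def jaccard_scores(edges):
    """Jaccard neighbourhood overlap for every unlinked node pair in an undirected graph."""
    nbrs = {}
    for u, v in edges:  # build adjacency sets
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    scores = {}
    for u, v in combinations(sorted(nbrs), 2):
        if v in nbrs[u]:
            continue  # pair already linked; nothing to predict
        union = len(nbrs[u] | nbrs[v])
        scores[(u, v)] = len(nbrs[u] & nbrs[v]) / union if union else 0.0
    return scores
```

In a supervised setup, scores like these become input features and observed future links become class labels for a standard classifier.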
-
Model Selection Strategies for Determining the Optimal Number of Overlapping Clusters in Additive Overlapping Partitional Clustering J. Classif. (IF 2.0) Pub Date : 2022-01-17 Julian Rossbroich, Jeffrey Durieux, Tom F. Wilderjans
In various scientific fields, researchers make use of partitioning methods (e.g., K-means) to disclose the structural mechanisms underlying object-by-variable data. In some instances, however, a grouping of objects into clusters that are allowed to overlap (i.e., assigning objects to multiple clusters) might lead to a better representation of the underlying clustering structure. To obtain an overlapping …
-
Erratum to: The Spatial Representation of Consumer Dispersion Patterns via a New Multi-level Latent Class Methodology J. Classif. (IF 2.0) Pub Date : 2021-12-28 Sunghoon Kim, Ashley Stadler Blank, Wayne S. DeSarbo, Jeroen K. Vermunt
-
Editorial: Journal of Classification Vol. 38-3 J. Classif. (IF 2.0) Pub Date : 2021-12-11 Paul D. McNicholas
-
Ordinal Trees and Random Forests: Score-Free Recursive Partitioning and Improved Ensembles J. Classif. (IF 2.0) Pub Date : 2021-12-04 Gerhard Tutz
Existing ordinal trees and random forests typically use scores that are assigned to the ordered categories, which implies that a higher scale level is used. Versions of ordinal trees are proposed that take the scale level seriously and avoid the assignment of artificial scores. The construction principle is based on an investigation of the binary models that are implicitly used in parametric ordinal …
-
High-Dimensional Clustering via Random Projections J. Classif. (IF 2.0) Pub Date : 2021-11-22 Laura Anderlucci, Francesca Fortunato, Angela Montanari
This work addresses the unsupervised classification issue for high-dimensional data by exploiting the general idea of Random Projection Ensemble. Specifically, we propose to generate a set of low-dimensional independent random projections and to perform model-based clustering on each of them. The top B∗ projections, i.e., the projections which show the best grouping structure, are then retained. The …
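The generate-cluster-and-retain idea can be sketched schematically. This is only an illustration of the ensemble mechanism, not the authors' method: the paper performs model-based (mixture) clustering on each projection, whereas this sketch substitutes a naive k-means and ranks projections by within-cluster sum of squares; all function names and parameters are our assumptions.

```python
import numpy as np

def _kmeans(X, k, rng, iters=50):
    """Naive k-means; stands in for the model-based clustering step."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    wss = ((X - centers[labels]) ** 2).sum()  # within-cluster sum of squares
    return labels, wss

def random_projection_clustering(X, k=2, n_proj=20, d_low=2, top_b=3, seed=0):
    """Cluster many random low-dimensional projections; keep the top_b best-scoring ones."""
    rng = np.random.default_rng(seed)
    scored = []
    for _ in range(n_proj):
        P = rng.standard_normal((X.shape[1], d_low)) / np.sqrt(d_low)
        labels, wss = _kmeans(X @ P, k, rng)
        scored.append((wss, labels))
    scored.sort(key=lambda t: t[0])          # lower score = tighter grouping
    return [labels for _, labels in scored[:top_b]]
```

The retained labelings would then be aggregated into a consensus partition; the aggregation step is omitted here.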
-
Co-clustering of Time-Dependent Data via the Shape Invariant Model J. Classif. (IF 2.0) Pub Date : 2021-10-06 Alessandro Casa, Charles Bouveyron, Elena Erosheva, Giovanna Menardi
Multivariate time-dependent data, where multiple features are observed over time for a set of individuals, are increasingly widespread in many application domains. To model these data, we need to account for relations among both time instants and variables and, at the same time, for subject heterogeneity. We propose a new co-clustering methodology for grouping individuals and variables simultaneously …
-
Chimeral Clustering J. Classif. (IF 2.0) Pub Date : 2021-10-02 Jason Hou-Liu, Ryan P. Browne
Hybrid species tend to exhibit a mixture of parent characteristics; we propose chimeral clusters as exhibiting a mixture of parent parameters, a type of intercluster structure. Morphometric measurements in the iris dataset describe the hybrid Iris versicolor as intermediate to those of parent species Iris setosa and Iris virginica, which motivates our extension of Gaussian mixture models to allow mixing …