Input-output configuration search and feature selection are key challenges in machine learning, particularly for high-dimensional datasets. The Model Input-Output Configuration Search with Embedded Feature Selection (MICS-EFS) framework addresses these challenges by integrating Feature Selection (FS) and Neural Architecture Search (NAS) into a modular approach. In this study, the core components of MICS-EFS (measurement, search strategy, and modeling methodology) were systematically analyzed and extended to evaluate their individual and combined impact on performance. A simpler feature set with straightforward patterns was compared to more complex feature sets offering richer data representations. These were paired with both a Convolutional Neural Network (CNN) classifier and a more advanced Vision Transformer (ViT) architecture. The results demonstrate that the simpler feature set with the CNN classifier achieves high accuracy and fast convergence, while the complex feature set paired with the ViT achieves the highest overall accuracy of 98.85% on the MNIST dataset. Including a reconstruction component that captures internal data dependencies, enabled by the enhanced modeling methodology, consistently improves performance across all cases. This comprehensive evaluation highlights the impact of extending MICS-EFS and demonstrates the framework's robust performance and adaptability.
Robust Superiority of the MICS-EFS Configuration Search Algorithm Through Modular Extensions of Complex Neural Architectures
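To make the classification-plus-reconstruction idea from the abstract concrete, the sketch below pairs a small CNN encoder with both a classification head and a decoder that reconstructs the input, trained with a weighted joint loss. This is a minimal illustration only, not the MICS-EFS implementation: the use of PyTorch, the class and parameter names (e.g. CNNWithReconstruction, recon_weight), and the layer sizes for a 28x28 MNIST-like input are all assumptions.

```python
# Minimal sketch (not the authors' implementation) of a CNN classifier with an
# auxiliary reconstruction head, illustrating how capturing internal data
# dependencies can be added as a second training objective.
import torch
import torch.nn as nn


class CNNWithReconstruction(nn.Module):
    """Shared CNN encoder feeding a classification head and a decoder that
    reconstructs the input (auxiliary objective). Sizes assume 1x28x28 inputs."""

    def __init__(self, num_classes: int = 10, recon_weight: float = 0.1):
        super().__init__()
        self.recon_weight = recon_weight  # weight of the auxiliary loss (assumed value)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 7 * 7, num_classes)
        )
        self.decoder = nn.Sequential(  # mirrors the encoder to rebuild the input
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

    def loss(self, x, y):
        logits, recon = self(x)
        cls_loss = nn.functional.cross_entropy(logits, y)   # classification objective
        rec_loss = nn.functional.mse_loss(recon, x)          # reconstruction objective
        return cls_loss + self.recon_weight * rec_loss        # joint loss


# Usage on a dummy MNIST-sized batch:
model = CNNWithReconstruction()
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
print(model.loss(x, y))
```

Swapping the CNN encoder for a ViT backbone, as compared in the study, would leave this joint-loss structure unchanged; only the shared encoder and the decoder's input shape would differ.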