-
Efficient Multiplayer Battle Game Optimizer for Adversarial Robust Neural Architecture Search arXiv.cs.NE Pub Date : 2024-03-15 Rui Zhong, Yuefeng Xu, Chao Zhang, Jun Yu
This paper introduces a novel metaheuristic algorithm, known as the efficient multiplayer battle game optimizer (EMBGO), specifically designed for addressing complex numerical optimization tasks. The motivation behind this research stems from the need to rectify identified shortcomings in the original MBGO, particularly in search operators during the movement phase, as revealed through ablation experiments
-
Single- and Multi-Agent Private Active Sensing: A Deep Neuroevolution Approach arXiv.cs.NE Pub Date : 2024-03-15 George Stamatelis, Angelos-Nikolaos Kanatas, Ioannis Asprogerakas, George C. Alexandropoulos
In this paper, we focus on one centralized and one decentralized problem of active hypothesis testing in the presence of an eavesdropper. For the centralized problem including a single legitimate agent, we present a new framework based on NeuroEvolution (NE), whereas, for the decentralized problem, we develop a novel NE-based method for solving collaborative multi-agent tasks, which interestingly maintains
-
A Conceptual Framework For White Box Neural Networks arXiv.cs.NE Pub Date : 2024-03-14 Maciej Satkiewicz
This paper introduces semantic features as a general conceptual framework for fully explainable neural network layers. A well-motivated proof-of-concept model for a relevant subproblem of MNIST consists of 4 such layers with a total of 4.8K learnable parameters. The model is easily interpretable, achieves human-level adversarial test accuracy with no form of adversarial training, requires little hyperparameter
-
Emotional Intelligence Through Artificial Intelligence: NLP and Deep Learning in the Analysis of Healthcare Texts arXiv.cs.NE Pub Date : 2024-03-14 Prashant Kumar Nag, Amit Bhagat, R. Vishnu Priya, Deepak kumar Khare
This manuscript presents a methodical examination of the utilization of Artificial Intelligence in the assessment of emotions in texts related to healthcare, with a particular focus on the incorporation of Natural Language Processing and deep learning technologies. We scrutinize numerous research studies that employ AI to augment sentiment analysis, categorize emotions, and forecast patient outcomes
-
Towards a theory of model distillation arXiv.cs.NE Pub Date : 2024-03-14 Enric Boix-Adsera
Distillation is the task of replacing a complicated machine learning model with a simpler model that approximates the original [BCNM06,HVD15]. Despite many practical applications, basic questions about the extent to which models can be distilled, and the runtime and amount of data needed to distill, remain largely open. To study these questions, we initiate a general theory of distillation, defining
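As background for this entry, the standard Hinton-style distillation objective (not the paper's own theory) can be sketched in a few lines: the student is trained to match the teacher's temperature-softened output distribution. All names here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling; higher T softens the distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the softened teacher and student distributions,
    the classic knowledge-distillation objective [HVD15]."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(p_teacher, p_student))
```

By Gibbs' inequality the loss is minimized exactly when the student reproduces the teacher's softened distribution.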
-
The Runtime of Random Local Search on the Generalized Needle Problem arXiv.cs.NE Pub Date : 2024-03-13 Benjamin Doerr, Andrew James Kelley
In their recent work, C. Doerr and Krejca (Transactions on Evolutionary Computation, 2023) proved upper bounds on the expected runtime of the randomized local search heuristic on generalized Needle functions. Based on these upper bounds, they deduce in a not fully rigorous manner a drastic influence of the needle radius $k$ on the runtime. In this short article, we add the missing lower bound necessary
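For readers unfamiliar with the heuristic under analysis, here is a minimal sketch of randomized local search (RLS) on bit strings. It is run on OneMax purely as a stand-in fitness; the paper's analysis concerns generalized Needle functions, which this sketch does not reproduce.

```python
import random

def rls(fitness, n, max_iters=10000, seed=0):
    """Randomized local search on bit strings: flip one uniformly random
    bit per step, keep the offspring if it is at least as fit."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = list(x)
        y[rng.randrange(n)] ^= 1  # flip exactly one bit
        if fitness(y) >= fitness(x):
            x = y
    return x

# OneMax (count of ones) as an illustrative fitness function.
best = rls(sum, 20, max_iters=5000)
```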
-
Ant Colony Sampling with GFlowNets for Combinatorial Optimization arXiv.cs.NE Pub Date : 2024-03-11 Minsu Kim, Sanghyeok Choi, Jiwoo Son, Hyeonah Kim, Jinkyoo Park, Yoshua Bengio
This paper introduces the Generative Flow Ant Colony Sampler (GFACS), a novel neural-guided meta-heuristic algorithm for combinatorial optimization. GFACS integrates generative flow networks (GFlowNets) with the ant colony optimization (ACO) methodology. GFlowNets, a generative model that learns a constructive policy in combinatorial spaces, enhance ACO by providing an informed prior distribution of
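As context for GFACS, a minimal plain ACO for the TSP (without the GFlowNet prior the paper adds) might look as follows; parameter names and values are illustrative assumptions, not the paper's configuration.

```python
import random

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal ant colony optimization for the TSP: ants build tours guided
    by pheromone (tau) and inverse distance; pheromone evaporates each
    iteration and is reinforced along the best tour found so far."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                r, acc = rng.random() * sum(weights), 0.0
                for j, w in zip(choices, weights):
                    acc += w
                    if acc >= r:
                        tour.append(j)
                        break
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):            # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for k in range(n):            # reinforcement along the best tour
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_tour, best_len
```

GFACS replaces the hand-tuned pheromone prior with a distribution learned by a GFlowNet, per the abstract.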
-
MAP-Elites with Transverse Assessment for Multimodal Problems in Creative Domains arXiv.cs.NE Pub Date : 2024-03-11 Marvin Zammit, Antonios Liapis, Georgios N. Yannakakis
The recent advances in language-based generative models have paved the way for the orchestration of multiple generators of different artefact types (text, image, audio, etc.) into one system. Presently, many open-source pre-trained models combine text with other modalities, thus enabling shared vector embeddings to be compared across different generators. Within this context we propose a novel approach
-
Multiple Population Alternate Evolution Neural Architecture Search arXiv.cs.NE Pub Date : 2024-03-11 Juan Zou, Han Chu, Yizhang Xia, Junwen Xu, Yuan Liu, Zhanglu Hou
The effectiveness of Evolutionary Neural Architecture Search (ENAS) is influenced by the design of the search space. Nevertheless, common methods including the global search space, scalable search space and hierarchical search space have certain limitations. Specifically, the global search space requires a significant amount of computational resources and time, the scalable search space sacrifices
-
On the Robustness of Lexicase Selection to Contradictory Objectives arXiv.cs.NE Pub Date : 2024-03-11 Shakiba Shahbandegan, Emily Dolson
Lexicase and epsilon-lexicase selection are state of the art parent selection techniques for problems featuring multiple selection criteria. Originally, lexicase selection was developed for cases where these selection criteria are unlikely to be in conflict with each other, but preliminary work suggests it is also a highly effective many-objective optimization algorithm. However, to predict whether
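The lexicase selection procedure the abstract refers to is compact enough to sketch directly: candidates are filtered through the selection criteria in random order, keeping only elite performers on each case. This is the textbook algorithm, not the paper's new analysis.

```python
import random

def lexicase_select(population, test_cases, rng=random):
    """Lexicase parent selection: shuffle the test cases, then repeatedly
    keep only the candidates with the best (lowest) error on the next case."""
    pool = list(population)
    cases = list(test_cases)
    rng.shuffle(cases)
    for case in cases:
        best = min(case(ind) for ind in pool)
        pool = [ind for ind in pool if case(ind) == best]
        if len(pool) == 1:
            break
    return rng.choice(pool)
```

When criteria contradict each other, different shuffle orders favor different specialists, which is exactly the regime the paper probes.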
-
Long-term Frame-Event Visual Tracking: Benchmark Dataset and Baseline arXiv.cs.NE Pub Date : 2024-03-09 Xiao Wang, Ju Huang, Shiao Wang, Chuanming Tang, Bo Jiang, Yonghong Tian, Jin Tang, Bin Luo
Current event-/frame-event based trackers undergo evaluation on short-term tracking datasets, however, the tracking of real-world scenarios involves long-term tracking, and the performance of existing tracking algorithms in these scenarios remains unclear. In this paper, we first propose a new long-term and large-scale frame-event single object tracking dataset, termed FELT. It contains 742 videos
-
Algorithm-Hardware Co-Design of Distribution-Aware Logarithmic-Posit Encodings for Efficient DNN Inference arXiv.cs.NE Pub Date : 2024-03-08 Akshat Ramachandran, Zishen Wan, Geonhwa Jeong, John Gustafson, Tushar Krishna
Traditional Deep Neural Network (DNN) quantization methods using integer, fixed-point, or floating-point data types struggle to capture diverse DNN parameter distributions at low precision, and often require large silicon overhead and intensive quantization-aware training. In this study, we introduce Logarithmic Posits (LP), an adaptive, hardware-friendly data type inspired by posits that dynamically
-
On the Markov Property of Neural Algorithmic Reasoning: Analyses and Methods arXiv.cs.NE Pub Date : 2024-03-07 Montgomery Bohde, Meng Liu, Alexandra Saxton, Shuiwang Ji
Neural algorithmic reasoning is an emerging research direction that endows neural networks with the ability to mimic algorithmic executions step-by-step. A common paradigm in existing designs involves the use of historical embeddings in predicting the results of future execution steps. Our observation in this work is that such historical dependence intrinsically contradicts the Markov nature of algorithmic
-
A Survey of Lottery Ticket Hypothesis arXiv.cs.NE Pub Date : 2024-03-07 Bohan Liu, Zijie Zhang, Peixiong He, Zhensen Wang, Yang Xiao, Ruimeng Ye, Yang Zhou, Wei-Shinn Ku, Bo Hui
The Lottery Ticket Hypothesis (LTH) states that a dense neural network model contains a highly sparse subnetwork (i.e., winning tickets) that can achieve even better performance than the original model when trained in isolation. While LTH has been proved both empirically and theoretically in many works, there still are some open issues, such as efficiency and scalability, to be addressed. Also, the
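The pruning primitive at the heart of most LTH pipelines is one-shot magnitude pruning, sketched below; iterating train, prune, and rewind-to-initialization is the usual recipe for finding winning tickets. The flat weight list is a simplification for illustration.

```python
def magnitude_prune(weights, sparsity):
    """Return a binary mask that zeroes out the given fraction of
    smallest-magnitude weights and keeps the rest."""
    n_prune = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    mask = [1] * len(weights)
    for i in order[:n_prune]:
        mask[i] = 0
    return mask
```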
-
Restricted Bayesian Neural Network arXiv.cs.NE Pub Date : 2024-03-06 Sourav Ganguly
Modern deep learning tools are remarkably effective in addressing intricate problems. However, their operation as black-box models introduces increased uncertainty in predictions. Additionally, they contend with various challenges, including the need for substantial storage space in large networks, issues of overfitting, underfitting, vanishing gradients, and more. This study explores the concept of
-
Lifelong Intelligence Beyond the Edge using Hyperdimensional Computing arXiv.cs.NE Pub Date : 2024-03-07 Xiaofan Yu, Anthony Thomas, Ivannia Gomez Moreno, Louis Gutierrez, Tajana Rosing
On-device learning has emerged as a prevailing trend that avoids the slow response time and costly communication of cloud-based learning. The ability to learn continuously and indefinitely in a changing environment, and with resource constraints, is critical for real sensor deployments. However, existing designs are inadequate for practical scenarios with (i) streaming data input, (ii) lack of supervision
-
Noisy Spiking Actor Network for Exploration arXiv.cs.NE Pub Date : 2024-03-07 Ding Chen, Peixi Peng, Tiejun Huang, Yonghong Tian
As a general method for exploration in deep reinforcement learning (RL), NoisyNet can produce problem-specific exploration strategies. Spiking neural networks (SNNs), due to their binary firing mechanism, have strong robustness to noise, making it difficult to realize efficient exploration with local disturbances. To solve this exploration problem, we propose a noisy spiking actor network (NoisySAN)
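The NoisyNet mechanism the abstract builds on can be sketched as a linear layer whose weights are mu + sigma * eps, with eps resampled per forward pass; the sigma parameters are learned, so exploration is adapted rather than injected ad hoc. This dependency-free sketch (constant initial mu, unfactorized Gaussian noise) simplifies the published layer and is not the paper's spiking variant.

```python
import random

class NoisyLinear:
    """NoisyNet-style linear layer: effective weight = mu + sigma * eps,
    with fresh Gaussian eps each forward pass."""
    def __init__(self, n_in, n_out, sigma0=0.5, seed=0):
        self.rng = random.Random(seed)
        self.mu = [[0.1] * n_in for _ in range(n_out)]        # learnable means
        self.sigma = [[sigma0] * n_in for _ in range(n_out)]  # learnable noise scales
        self.n_in, self.n_out = n_in, n_out

    def forward(self, x):
        out = []
        for o in range(self.n_out):
            s = 0.0
            for i in range(self.n_in):
                eps = self.rng.gauss(0.0, 1.0)  # resampled on every call
                s += (self.mu[o][i] + self.sigma[o][i] * eps) * x[i]
            out.append(s)
        return out
```

Setting sigma to zero recovers a deterministic layer; the paper's challenge is that binary spiking activations wash out such local weight perturbations.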
-
SWAP-NAS: Sample-Wise Activation Patterns For Ultra-Fast NAS arXiv.cs.NE Pub Date : 2024-03-07 Yameng Peng, Andy Song, Haytham M. Fayek, Vic Ciesielski, Xiaojun Chang
Training-free metrics (a.k.a. zero-cost proxies) are widely used to avoid resource-intensive neural network training, especially in Neural Architecture Search (NAS). Recent studies show that existing training-free metrics have several limitations, such as limited correlation and poor generalisation across different search spaces and tasks. Hence, we propose Sample-Wise Activation Patterns and its derivative
-
Neural Architecture Search using Particle Swarm and Ant Colony Optimization arXiv.cs.NE Pub Date : 2024-03-06 Séamus Lankford, Diarmuid Grimes
Neural network models have a number of hyperparameters that must be chosen along with their architecture. This can be a heavy burden on a novice user, choosing which architecture and what values to assign to parameters. In most cases, default hyperparameters and architectures are used. Significant improvements to model accuracy can be achieved through the evaluation of multiple architectures. A process
-
Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales for Pruning Recurrent SNN arXiv.cs.NE Pub Date : 2024-03-06 Biswadeep Chakraborty, Beomseok Kang, Harshit Kumar, Saibal Mukhopadhyay
Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally efficient and brain-inspired learning model. The design of sparse RSNNs with fewer neurons and synapses helps reduce the computational complexity of RSNNs. Traditionally, sparse SNNs are obtained by first training a dense and complex SNN for a target task, and, then, pruning neurons with low activity (activity-based pruning)
-
Explaining Genetic Programming Trees using Large Language Models arXiv.cs.NE Pub Date : 2024-03-06 Paula Maddigan, Andrew Lensen, Bing Xue
Genetic programming (GP) has the potential to generate explainable results, especially when used for dimensionality reduction. In this research, we investigate the potential of leveraging eXplainable AI (XAI) and large language models (LLMs) like ChatGPT to improve the interpretability of GP-based non-linear dimensionality reduction. Our study introduces a novel XAI dashboard named GP4NLDR, the first
-
Mem-elements based Neuromorphic Hardware for Neural Network Application arXiv.cs.NE Pub Date : 2024-03-05 Ankur Singh
The thesis investigates the utilization of memristive and memcapacitive crossbar arrays in low-power machine learning accelerators, offering a comprehensive co-design framework for deep neural networks (DNN). The model, implemented through a hybrid Python and PyTorch approach, accounts for various non-idealities, achieving exceptional training accuracies of 90.02% and 91.03% for the CIFAR-10 dataset
-
G-EvoNAS: Evolutionary Neural Architecture Search Based on Network Growth arXiv.cs.NE Pub Date : 2024-03-05 Juan Zou, Weiwei Jiang, Yizhang Xia, Yuan Liu, Zhanglu Hou
The evolutionary paradigm has been successfully applied to neural architecture search (NAS) in recent years. Due to the vast search complexity of the global space, current research mainly seeks to repeatedly stack partial architectures to build the entire model or to seek the entire model based on manually designed benchmark modules. The above two methods are attempts to reduce the search difficulty by narrowing
-
Evolution Transformer: In-Context Evolutionary Optimization arXiv.cs.NE Pub Date : 2024-03-05 Robert Tjarko Lange, Yingtao Tian, Yujin Tang
Evolutionary optimization algorithms are often derived from loose biological analogies and struggle to leverage information obtained during the sequential course of optimization. An alternative promising approach is to leverage data and directly discover powerful optimization principles via meta-optimization. In this work, we follow such a paradigm and introduce Evolution Transformer, a causal Transformer
-
SOFIM: Stochastic Optimization Using Regularized Fisher Information Matrix arXiv.cs.NE Pub Date : 2024-03-05 Gayathri C, Mrinmay Sen, A. K. Qin, Raghu Kishore N, Yen-Wei Chen, Balasubramanian Raman
This paper introduces a new stochastic optimization method based on the regularized Fisher information matrix (FIM), named SOFIM, which can efficiently utilize the FIM to approximate the Hessian matrix for finding Newton's gradient update in large-scale stochastic optimization of machine learning models. It can be viewed as a variant of natural gradient descent (NGD), where the challenge of storing
-
Encodings for Prediction-based Neural Architecture Search arXiv.cs.NE Pub Date : 2024-03-04 Yash Akhauri, Mohamed S. Abdelfattah
Predictor-based methods have substantially enhanced Neural Architecture Search (NAS) optimization. The efficacy of these predictors is largely influenced by the method of encoding neural network architectures. While traditional encodings used an adjacency matrix describing the graph structure of a neural network, novel encodings embrace a variety of approaches from unsupervised pretraining of latent
-
Toward Neuromic Computing: Neurons as Autoencoders arXiv.cs.NE Pub Date : 2024-03-04 Larry Bull
The computational capabilities of dendrites have become increasingly clear. This letter presents the idea that neural backpropagation is using dendritic processing to enable individual neurons to perform autoencoding. Using a very simple connection weight search heuristic and artificial neural network model, the effects of interleaving autoencoding for each neuron in a hidden layer of a feedforward
-
Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution arXiv.cs.NE Pub Date : 2024-03-04 Hongshu Guo, Yining Ma, Zeyuan Ma, Jiacheng Chen, Xinglin Zhang, Zhiguang Cao, Jun Zhang, Yue-Jiao Gong
Evolutionary algorithms, such as Differential Evolution, excel in solving real-parameter optimization challenges. However, the effectiveness of a single algorithm varies across different problem instances, necessitating considerable efforts in algorithm selection or configuration. This paper aims to address the limitation by leveraging the complementary strengths of a group of algorithms and dynamically
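The base algorithm being dynamically selected here, Differential Evolution, is standard enough to sketch; this is the classic DE/rand/1/bin scheme with illustrative defaults, not the paper's selection framework.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.5, CR=0.9,
                           n_gens=100, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference vector, binomially
    cross over with the parent, and keep the better of parent and trial."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(n_gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clamp to bounds
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one survivor selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

The paper's point is that fixed choices like F and CR, and indeed the operator itself, are better selected dynamically per instance.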
-
Universality of reservoir systems with recurrent neural networks arXiv.cs.NE Pub Date : 2024-03-04 Hiroki Yasumoto, Toshiyuki Tanaka
Approximation capability of reservoir systems whose reservoir is a recurrent neural network (RNN) is discussed. In our problem setting, a reservoir system approximates a set of functions just by adjusting its linear readout while the reservoir is fixed. We will show what we call uniform strong universality of a family of RNN reservoir systems for a certain class of functions to be approximated. This
-
Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification arXiv.cs.NE Pub Date : 2024-03-04 Ankur Singh, Sanghyeon Choi, Gunuk Wang, Maryaradhiya Daimari, Byung-Geun Lee
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing spatiotemporal signals. Known for its temporal processing prowess, RC significantly lowers training costs compared to conventional recurrent neural networks. A key component in its hardware deployment is the ability to generate dynamic reservoir states. Our research introduces a novel dual-memory
-
Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning arXiv.cs.NE Pub Date : 2024-03-01 Christian Raymond, Qi Chen, Bing Xue, Mengjie Zhang
In this paper, we develop upon the topic of loss function learning, an emergent meta-learning paradigm that aims to learn loss functions that significantly improve the performance of the models trained under them. Specifically, we propose a new meta-learning framework for task and model-agnostic loss function learning via a hybrid search approach. The framework first uses genetic programming to find
-
Parallel Algorithms for Exact Enumeration of Deep Neural Network Activation Regions arXiv.cs.NE Pub Date : 2024-02-29 Sabrina Drammis, Bowen Zheng, Karthik Srinivasan, Robert C. Berwick, Nancy A. Lynch, Robert Ajemian
A feedforward neural network using rectified linear units constructs a mapping from inputs to outputs by partitioning its input space into a set of convex regions where points within a region share a single affine transformation. In order to understand how neural networks work, when and why they fail, and how they compare to biological intelligence, we need to understand the organization and formation
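The partition described above can be made concrete: two inputs lie in the same convex region exactly when they switch the same ReLUs on. A minimal single-hidden-layer sketch of that activation pattern (not the paper's parallel enumeration algorithm):

```python
def activation_pattern(weights, biases, x):
    """Return the ReLU on/off pattern of input x for one hidden layer;
    inputs sharing a pattern lie in the same convex region and share a
    single affine input-to-output map."""
    pattern = []
    for w, b in zip(weights, biases):
        pre = sum(wi * xi for wi, xi in zip(w, x)) + b
        pattern.append(1 if pre > 0 else 0)
    return tuple(pattern)
```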
-
Parallel Hyperparameter Optimization Of Spiking Neural Network arXiv.cs.NE Pub Date : 2024-03-01 Thomas Firmin, Pierre Boulet, El-Ghazali Talbi
Spiking Neural Networks (SNNs) are based on a more biologically inspired approach than usual artificial neural networks. Such models are characterized by complex dynamics between neurons and spikes. These are very sensitive to the hyperparameters, making their optimization challenging. To tackle hyperparameter optimization of SNNs, we initially extended the signal loss issue of SNNs to what we
-
Event-Driven Learning for Spiking Neural Networks arXiv.cs.NE Pub Date : 2024-03-01 Wenjie Wei, Malu Zhang, Jilin Zhang, Ammar Belatreche, Jibin Wu, Zijing Xu, Xuerui Qiu, Hong Chen, Yang Yang, Haizhou Li
Brain-inspired spiking neural networks (SNNs) have gained prominence in the field of neuromorphic computing owing to their low energy consumption during feedforward inference on neuromorphic hardware. However, it remains an open challenge how to effectively benefit from the sparse event-driven property of SNNs to minimize backpropagation learning costs. In this paper, we conduct a comprehensive examination
-
Airport take-off and landing optimization through genetic algorithms arXiv.cs.NE Pub Date : 2024-02-29 Fernando Guedan Pecker, Cristian Ramirez Atencia
This research addresses the crucial issue of pollution from aircraft operations, focusing on optimizing both gate allocation and runway scheduling simultaneously, a novel approach not previously explored. The study presents an innovative genetic algorithm-based method for minimizing pollution from fuel combustion during aircraft take-off and landing at airports. This algorithm uniquely integrates the
-
Optimal ANN-SNN Conversion with Group Neurons arXiv.cs.NE Pub Date : 2024-02-29 Liuzhenghao Lv, Wei Fang, Li Yuan, Yonghong Tian
Spiking Neural Networks (SNNs) have emerged as a promising third generation of neural networks, offering unique characteristics such as binary outputs, high sparsity, and biological plausibility. However, the lack of effective learning algorithms remains a challenge for SNNs. For instance, while converting artificial neural networks (ANNs) to SNNs circumvents the need for direct training of SNNs, it
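The conversion idea the abstract refers to rests on rate coding: over many timesteps, the firing rate of an integrate-and-fire neuron approximates the ReLU activation it replaces. A minimal sketch under that standard assumption (not the paper's group-neuron construction):

```python
def if_neuron_rate(input_current, threshold=1.0, t_steps=100):
    """Simulate an integrate-and-fire neuron with soft reset; its firing
    rate over t_steps approximates ReLU(input) / threshold, the basis of
    rate-coded ANN-to-SNN conversion."""
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += input_current
        if v >= threshold:
            spikes += 1
            v -= threshold  # soft reset reduces conversion error vs reset-to-zero
    return spikes / t_steps
```

The residual membrane potential at the end of the window is one source of the conversion error that specialized neuron designs aim to shrink.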
-
Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks arXiv.cs.NE Pub Date : 2024-02-29 Kade M. Heckel, Thomas Nowotny
As the role of artificial intelligence becomes increasingly pivotal in modern society, the efficient training and deployment of deep neural networks have emerged as critical areas of focus. Recent advancements in attention-based large neural architectures have spurred the development of AI accelerators, facilitating the training of extensive, multi-billion parameter models. Despite their effectiveness
-
Weighted strategies to guide a multi-objective evolutionary algorithm for multi-UAV mission planning arXiv.cs.NE Pub Date : 2024-02-28 Cristian Ramirez-Atencia, Javier Del Ser, David Camacho
Management and mission planning over a swarm of unmanned aerial vehicles (UAVs) remains to date a challenging research trend with regard to this particular type of aircraft. These vehicles are controlled by a number of ground control stations (GCS), from which they are commanded to cooperatively perform different tasks in specific geographic areas of interest. Mathematically the problem of coordinating
-
Deep Neural Network Models Trained With A Fixed Random Classifier Transfer Better Across Domains arXiv.cs.NE Pub Date : 2024-02-28 Hafiz Tiomoko Ali, Umberto Michieli, Ji Joong Moon, Daehyun Kim, Mete Ozay
The recently discovered Neural collapse (NC) phenomenon states that the last-layer weights of Deep Neural Networks (DNN), converge to the so-called Equiangular Tight Frame (ETF) simplex, at the terminal phase of their training. This ETF geometry is equivalent to vanishing within-class variability of the last layer activations. Inspired by NC properties, we explore in this paper the transferability
-
Implementing Online Reinforcement Learning with Clustering Neural Networks arXiv.cs.NE Pub Date : 2024-02-28 James E. Smith
An agent employing reinforcement learning takes inputs (state variables) from an environment and performs actions that affect the environment in order to achieve some objective. Rewards (positive or negative) guide the agent toward improved future actions. This paper builds on prior clustering neural network research by constructing an agent with biologically plausible neo-Hebbian three-factor synaptic
-
Large Language Models As Evolution Strategies arXiv.cs.NE Pub Date : 2024-02-28 Robert Tjarko Lange, Yingtao Tian, Yujin Tang
Large Transformer models are capable of implementing a plethora of so-called in-context learning algorithms. These include gradient descent, classification, sequence completion, transformation, and improvement. In this work, we investigate whether large language models (LLMs), which never explicitly encountered the task of black-box optimization, are in principle capable of implementing evolutionary
-
Escaping Local Optima in Global Placement arXiv.cs.NE Pub Date : 2024-02-28 Ke Xue, Xi Lin, Yunqi Shi, Shixiong Kai, Siyuan Xu, Chao Qian
Placement is crucial in the physical design, as it greatly affects power, performance, and area metrics. Recent advancements in analytical methods, such as DREAMPlace, have demonstrated impressive performance in global placement. However, DREAMPlace has some limitations, e.g., may not guarantee legalizable placements under the same settings, leading to fragile and unpredictable results. This paper
-
Understanding the Role of Pathways in a Deep Neural Network arXiv.cs.NE Pub Date : 2024-02-28 Lei Lyu, Chen Pang, Jihua Wang
Deep neural networks have demonstrated superior performance in artificial intelligence applications, but the opaqueness of their inner working mechanism is one major drawback in their application. The prevailing unit-based interpretation is a statistical observation of stimulus-response data, which fails to show a detailed internal process of inherent mechanisms of neural networks. In this work, we
-
A Neural Rewriting System to Solve Algorithmic Problems arXiv.cs.NE Pub Date : 2024-02-27 Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti
Modern neural network architectures still struggle to learn algorithmic procedures that require to systematically apply compositional rules to solve out-of-distribution problem instances. In this work, we propose an original approach to learn algorithmic tasks inspired by rewriting systems, a classic framework in symbolic artificial intelligence. We show that a rewriting system can be implemented as
-
Reinforced In-Context Black-Box Optimization arXiv.cs.NE Pub Date : 2024-02-27 Lei Song, Chenxiao Gao, Ke Xue, Chenyang Wu, Dong Li, Jianye Hao, Zongzhang Zhang, Chao Qian
Black-Box Optimization (BBO) has found successful applications in many fields of science and engineering. Recently, there has been a growing interest in meta-learning particular components of BBO algorithms to speed up optimization and get rid of tedious hand-crafted heuristics. As an extension, learning the entire algorithm from data requires the least labor from experts and can provide the most flexibility
-
Benchmarking GPT-4 on Algorithmic Problems: A Systematic Evaluation of Prompting Strategies arXiv.cs.NE Pub Date : 2024-02-27 Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti
Large Language Models (LLMs) have revolutionized the field of Natural Language Processing thanks to their ability to reuse knowledge acquired on massive text corpora on a wide variety of downstream tasks, with minimal (if any) tuning steps. At the same time, it has been repeatedly shown that LLMs lack systematic generalization, which would allow them to extrapolate the learned statistical regularities outside
-
Exploratory Landscape Analysis for Mixed-Variable Problems arXiv.cs.NE Pub Date : 2024-02-26 Raphael Patrick Prager, Heike Trautmann
Exploratory landscape analysis and fitness landscape analysis in general have been pivotal in facilitating problem understanding, algorithm design and endeavors such as automated algorithm selection and configuration. These techniques have largely been limited to search spaces of a single domain. In this work, we provide the means to compute exploratory landscape features for mixed-variable problems
-
Efficient Online Learning for Networks of Two-Compartment Spiking Neurons arXiv.cs.NE Pub Date : 2024-02-25 Yujia Yin, Xinyi Chen, Chenxiang Ma, Jibin Wu, Kay Chen Tan
The brain-inspired Spiking Neural Networks (SNNs) have garnered considerable research interest due to their superior performance and energy efficiency in processing temporal signals. Recently, a novel multi-compartment spiking neuron model, namely the Two-Compartment LIF (TC-LIF) model, has been proposed and exhibited a remarkable capacity for sequential modelling. However, training the TC-LIF model
-
Q-FOX Learning: Breaking Tradition in Reinforcement Learning arXiv.cs.NE Pub Date : 2024-02-26 Mahmood Alqaseer, Yossra H. Ali, Tarik A. Rashid
Reinforcement learning (RL) is a subset of artificial intelligence (AI) where agents learn the best action by interacting with the environment, making it suitable for tasks that do not require labeled data or direct supervision. Hyperparameters (HP) tuning refers to choosing the best parameter that leads to optimal solutions in RL algorithms. Manual or random tuning of the HP may be a crucial process
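The RL backbone being tuned here is tabular Q-learning, whose single update rule is worth stating concretely; this is the textbook rule, with the FOX-based hyperparameter tuning left out.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the bootstrapped
    target r + gamma * max_a' Q(s', a'). alpha and gamma are exactly the
    hyperparameters the paper proposes to tune automatically."""
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
    return Q
```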
-
Clustering in Dynamic Environments: A Framework for Benchmark Dataset Generation With Heterogeneous Changes arXiv.cs.NE Pub Date : 2024-02-24 Danial Yazdani, Juergen Branke, Mohammad Sadegh Khorshidi, Mohammad Nabi Omidvar, Xiaodong Li, Amir H. Gandomi, Xin Yao
Clustering in dynamic environments is of increasing importance, with broad applications ranging from real-time data analysis and online unsupervised learning to dynamic facility location problems. While meta-heuristics have shown promising effectiveness in static clustering tasks, their application for tracking optimal clustering solutions or robust clustering over time in dynamic environments remains
-
Prompting LLMs to Compose Meta-Review Drafts from Peer-Review Narratives of Scholarly Manuscripts arXiv.cs.NE Pub Date : 2024-02-23 Shubhra Kanti Karmaker Santu, Sanjeev Kumar Sinha, Naman Bansal, Alex Knipper, Souvika Sarkar, John Salvador, Yash Mahajan, Sri Guttikonda, Mousumi Akter, Matthew Freestone, Matthew C. Williams Jr
One of the most important yet onerous tasks in the academic peer-reviewing process is composing meta-reviews, which involves understanding the core contributions, strengths, and weaknesses of a scholarly manuscript based on peer-review narratives from multiple experts and then summarizing those multiple experts' perspectives into a concise holistic overview. Given the latest major developments in generative
-
A new approach for solving global optimization and engineering problems based on modified Sea Horse Optimizer arXiv.cs.NE Pub Date : 2024-02-21 Fatma A. Hashim, Reham R. Mostafa, Ruba Abu Khurma, Raneem Qaddoura, P. A. Castillo
Sea Horse Optimizer (SHO) is a noteworthy metaheuristic algorithm that emulates various intelligent behaviors exhibited by sea horses, encompassing feeding patterns, male reproductive strategies, and intricate movement patterns. To mimic the nuanced locomotion of sea horses, SHO integrates the logarithmic helical equation and Levy flight, effectively incorporating both random movements with substantial
-
An Effective Networks Intrusion Detection Approach Based on Hybrid Harris Hawks and Multi-Layer Perceptron arXiv.cs.NE Pub Date : 2024-02-21 Moutaz Alazab, Ruba Abu Khurma, Pedro A. Castillo, Bilal Abu-Salih, Alejandro Martin, David Camacho
This paper proposes an Intrusion Detection System (IDS) employing the Harris Hawks Optimization algorithm (HHO) to optimize Multilayer Perceptron learning by optimizing bias and weight parameters. HHO-MLP aims to select optimal parameters in its learning process to minimize intrusion detection errors in networks. HHO-MLP has been implemented using EvoloPy NN framework, an open-source Python tool specialized
-
Origami: (un)folding the abstraction of recursion schemes for program synthesis arXiv.cs.NE Pub Date : 2024-02-21 Matheus Campos Fernandes, Fabricio Olivetti de Franca, Emilio Francesquini
Program synthesis with Genetic Programming searches for a correct program that satisfies the input specification, which is usually provided as input-output examples. One particular challenge is how to effectively handle loops and recursion avoiding programs that never terminate. A helpful abstraction that can alleviate this problem is the employment of Recursion Schemes that generalize the combination
-
NeuralDiffuser: Controllable fMRI Reconstruction with Primary Visual Feature Guided Diffusion arXiv.cs.NE Pub Date : 2024-02-21 Haoyu Li, Hao Wu, Badong Chen
Reconstructing visual stimuli from functional Magnetic Resonance Imaging (fMRI) based on Latent Diffusion Models (LDM) provides a fine-grained retrieval of the brain. A challenge persists in reconstructing a cohesive alignment of details (such as structure, background, texture, color, etc.). Moreover, LDMs would generate different image results even under the same conditions. For these, we first uncover
-
Evolutionary Reinforcement Learning: A Systematic Review and Future Directions arXiv.cs.NE Pub Date : 2024-02-20 Yuanguo Lin, Fan Lin, Guorong Cai, Hong Chen, Lixin Zou, Pengcheng Wu
In response to the limitations of reinforcement learning and evolutionary algorithms (EAs) in complex problem-solving, Evolutionary Reinforcement Learning (EvoRL) has emerged as a synergistic solution. EvoRL integrates EAs and reinforcement learning, presenting a promising avenue for training intelligent agents. This systematic review firstly navigates through the technological background of EvoRL
-
A Neuro-Symbolic Approach to Multi-Agent RL for Interpretability and Probabilistic Decision Making arXiv.cs.NE Pub Date : 2024-02-21 Chitra Subramanian, Miao Liu, Naweed Khan, Jonathan Lenchner, Aporva Amarnath, Sarathkrishna Swaminathan, Ryan Riegel, Alexander Gray
Multi-agent reinforcement learning (MARL) is well-suited for runtime decision-making in optimizing the performance of systems where multiple agents coexist and compete for shared resources. However, applying common deep learning-based MARL solutions to real-world problems suffers from issues of interpretability, sample efficiency, partial observability, etc. To address these challenges, we present
-
A new simplified MOPSO based on Swarm Elitism and Swarm Memory: MO-ETPSO arXiv.cs.NE Pub Date : 2024-02-20 Ricardo Fitas
This paper presents an algorithm based on Particle Swarm Optimization (PSO), adapted for multi-objective optimization problems: the Elitist PSO (MO-ETPSO). The proposed algorithm integrates core strategies from the well-established NSGA-II approach, such as the Crowding Distance Algorithm, while leveraging the advantages of Swarm Intelligence in terms of individual and social cognition. A novel aspect
-
Function Class Learning with Genetic Programming: Towards Explainable Meta Learning for Tumor Growth Functionals arXiv.cs.NE Pub Date : 2024-02-19 E. M. C. Sijben, J. C. Jansen, P. A. N. Bosman, T. Alderliesten
Paragangliomas are rare, primarily slow-growing tumors for which the underlying growth pattern is unknown. Therefore, determining the best care for a patient is hard. Currently, if no significant tumor growth is observed, treatment is often delayed, as treatment itself is not without risk. However, by doing so, the risk of (irreversible) adverse effects due to tumor growth may increase. Being able
-
Mechanistic Neural Networks for Scientific Machine Learning arXiv.cs.NE Pub Date : 2024-02-20 Adeel Pervez, Francesco Locatello, Efstratios Gavves
This paper presents Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences. It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations, revealing the underlying dynamics of data and enhancing interpretability and efficiency in data modeling. Central to our approach is a novel