Accepted Papers

Main Track: Full Papers

Reason Explanation for Encouraging Behaviour Change Intention
Amal Abdulrahman, Deborah Richards and Ayse Aysin Bilgin

Off-Policy Exploitability-Evaluation in Two-Player Zero-Sum Markov Games
Kenshi Abe and Yusuke Kaneko

Siting and sizing of charging infrastructure for shared autonomous electric fleets
Ramin Ahadi, Wolfgang Ketter, John Collins and Nicolò Daina

Minimum-delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection
Lucas N. Alegre, Ana L. C. Bazzan and Bruno C. da Silva

Interrogating the Black Box: Transparency through Information-Seeking Dialogues
Andrea Aler Tubella, Andreas Theodorou and Juan Carlos Nieves

Cooperation and Reputation Dynamics with Reinforcement Learning
Nicolas Anastassacos, Julian Garcia, Stephen Hailes and Mirco Musolesi

State-Aware Variational Thompson Sampling for Deep Q-Networks
Siddharth Aravindan and Wee Sun Lee

Multi-Robot Task Allocation—Complexity and Approximation
Haris Aziz, Hau Chan, Agnes Cseh, Bo Li, Fahimeh Ramezani and Chenhao Wang

Robustness based on Accountability in Multiagent Organizations
Matteo Baldoni, Cristina Baroglio, Roberto Micalizio and Stefano Tedeschi

Predicting Voting Outcomes in Presence of Communities
Jacques Bara, Omer Lev and Paolo Turrini

Cooperative Prioritized Sweeping
Eugenio Bargiacchi, Timothy Verstraeten and Diederik M. Roijers

Existence and Computation of Maximin Fair Allocations Under Matroid-Rank Valuations
Siddharth Barman and Paritosh Verma

Complexity of Sequential Rules in Judgment Aggregation
Dorothea Baumeister, Linus Boes and Robin Weishaupt

Complexity of Scheduling and Predicting Round-Robin Tournaments
Dorothea Baumeister and Tobias Alexander Hogrebe

Optimising Long-Term Outcomes using Real-World Fluent Objectives: An Application to Football
Ryan Beal, Georgios Chalkiadakis, Timothy Norman and Sarvapali Ramchurn

Action Priors for Large Action Spaces in Robotics
Ondrej Biza, Dian Wang, Robert Platt, Jan-Willem van de Meent and Lawson L.S. Wong

Manipulability of Thiele Methods on Party-List Profiles
Sirin Botan

Egalitarian Judgment Aggregation
Sirin Botan, Ronald de Haan, Marija Slavkovik and Zoi Terzopoulou

Decision Model for a Virtual Agent that can Touch and be Touched
Fabien Boucaud, Catherine Pelachaud and Indira Thouvenin

Knowledge Improvement and Diversity under Interaction-Driven Adaptation of Learned Ontologies
Yasser Bourahla, Manuel Atencia and Jérôme Euzenat

On the Indecisiveness of Kelly-Strategyproof Social Choice Functions
Felix Brandt, Martin Bullinger and Patrick Lederer

High-Multiplicity Fair Allocation Made More Practical
Robert Bredereck, Aleksander Figiel, Andrzej Kaczmarczyk, Dušan Knop and Rolf Niedermeier

Multi-Agent Coordination in Adversarial Environments through Signal Mediated Strategies
Federico Cacciamani, Andrea Celli, Marco Ciccone and Nicola Gatti

Imitation Learning from Pixel-Level Demonstrations by HashReward
Xin-Qiang Cai, Yao-Xiang Ding, Yuan Jiang and Zhi-Hua Zhou

Worst-case Bounds for Spending a Common Budget
Pierre Cardi, Laurent Gourvès and Julien Lesca

Classifying the Complexity of the Possible Winner Problem on Partial Chains
Vishal Chakraborty and Phokion Kolaitis

Tractable mechanisms for computing near-optimal utility functions
Rahul Chandan, Dario Paccagnan and Jason R. Marden

Temporal Watermarks for Deep Reinforcement Learning Models
Kangjie Chen, Shangwei Guo, Tianwei Zhang, Shuxin Li and Yang Liu

A Game Theoretical Analysis of Non-Linear Blockchain System
Lin Chen, Lei Xu, Zhimin Gao, Ahmed Sunny, Keshav Kasichainula and Weidong Shi

A General Trust Framework for Multi-Agent Systems
Mingxi Cheng, Chenzhong Yin, Junyao Zhang, Shahin Nazarian, Jyotirmoy Deshmukh and Paul Bogdan

Scalable Anytime Planning for Multi-Agent MDPs
Shushman Choudhury, Jayesh Gupta, Peter Morales and Mykel Kochenderfer

Moblot: Molecular Oblivious Robots
Serafino Cicerone, Alessia Di Fonso, Gabriele Di Stefano and Alfredo Navarra

Spatial Consensus-Prevention in Robotic Swarms
Saar Cohen and Noa Agmon

Rational Synthesis in the Commons with Careless and Careful Agents
Rodica Condurache, Catalin Dima, Youssouf Oualhadj and Nicolas Troquard

Loss Bounds for Approximate Influence-Based Abstraction
Elena Congeduti, Alexander Mey and Frans Oliehoek

Improved Cooperation by Exploiting a Common Signal
Panayiotis Danassis, Zeki Doruk Erden and Boi Faltings

A Heuristic Algorithm for Multi-Agent Vehicle Routing with Automated Negotiation
Dave De Jonge, Filippo Bistaffa and Jordi Levy

Walrasian Equilibria in Markets with Small Demands
Argyrios Deligkas, Themistoklis Melissourgos and Paul Spirakis

Modeling Replicator Dynamics in Stochastic Games Using Markov Chain Method
Chuang Deng, Zhihai Rong, Lin Wang and Xiaofan Wang

Explaining BDI agent behaviour through dialogue
Louise Dennis and Nir Oren

Network Robustness via Global k-cores
Palash Dey, Suman Kalyan Maity, Sourav Medya and Arlei Silva

Efficient Nonmyopic Online Allocation of Scarce Reusable Resources
Zehao Dong, Sanmay Das, Patrick Fowler and Chien-Ju Ho

Learning Correlated Communication Topology in Multi-Agent Reinforcement learning
Yali Du, Bo Liu, Vincent Moens, Ziqi Liu, Zhicheng Ren, Jun Wang, Xu Chen and Haifeng Zhang

Log-time Prediction Markets for Interval Securities
Miroslav Dudík, Xintong Wang, David Pennock and David Rothschild

An Abstraction-based Method to Check Multi-Agent Deep Reinforcement-Learning Behaviors
Pierre El Mqirmi, Francesco Belardinelli and Borja G. León

Safe Multi-Agent Reinforcement Learning via Shielding
Ingy Elsayed-Aly, Suda Bharadwaj, Christopher Amato, Rüdiger Ehlers, Ufuk Topcu and Lu Feng

A knowledge compilation map for conditional preference statements-based languages
Helene Fargier and Jérôme Mengin

Self-Imitation Advantage Learning
Johan Ferret, Olivier Pietquin and Matthieu Geist

Strategyproof Facility Location Mechanisms on Discrete Trees
Alina Filimonov and Reshef Meir

Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice
Lewis Hammond, James Fox, Tom Everitt, Alessandro Abate and Michael Wooldridge

Probabilistic Control Argumentation Frameworks
Fabrice Gaignier, Yannis Dimopoulos, Jean-Guy Mailly and Pavlos Moraitis

Quantified Announcements and Common Knowledge
Rustam Galimullin and Thomas Ågotnes

Partially Observable Mean Field Reinforcement Learning
Sriram Ganapathi Subramanian, Matthew Taylor, Mark Crowley and Pascal Poupart

On a Notion of Monotonic Support for Bipolar Argumentation Frameworks
Anis Gargouri, Sébastien Konieczny, Pierre Marquis and Srdjan Vesic

Action Selection For Composable Modular Deep Reinforcement Learning
Vaibhav Gupta, Daksh Anand, Praveen Paruchuri and Akshat Kumar

Multivariate Analysis of Scheduling Fair Competitions
Siddharth Gupta and Meirav Zehavi

Multi-Agent Reinforcement Learning with Temporal Logic Specifications
Lewis Hammond, Alessandro Abate, Julian Gutierrez and Michael Wooldridge

A Hotelling-Downs Framework for Party Nominees
Paul Harrenstein, Grzegorz Lisowski, Ramanujan Sridharan and Paolo Turrini

Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards
Keyang He, Bikramjit Banerjee and Prashant Doshi

Equilibrium Learning in Combinatorial Auctions: Computing Approximate Bayesian Nash Equilibria via Pseudogradient Dynamics
Stefan Heidekrueger, Paul Sutterer, Nils Kohring, Maximilian Fichtl and Martin Bichler

Learning Node-Selection Strategies in Bounded Suboptimal Conflict-Based Search for Multi-Agent Path Finding
Taoan Huang, Bistra Dilkina and Sven Koenig

Show Me the Way: Intrinsic Motivation from Demonstrations
Leonard Hussenot, Robert Dadashi, Matthieu Geist and Olivier Pietquin

Action Advising with Advice Imitation in Deep Reinforcement Learning
Ercument Ilhan, Jeremy Gow and Diego Perez Liebana

Probabilistic Inference of Winners in Elections by Independent Random Voters
Aviram Imber and Benny Kimelfeld

Computing the Extremal Possible Ranks with Incomplete Preferences
Aviram Imber and Benny Kimelfeld

Trader-Company Method: A Metaheuristic for Interpretable Stock Price Prediction
Katsuya Ito, Kentaro Minami, Kentaro Imajo and Kei Nakagawa

Partition Aggregation for Participatory Budgeting
Pallavi Jain, Nimrod Talmon and Laurent Bulteau

Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning
Zhengyao Jiang, Pasquale Minervini, Minqi Jiang and Tim Rocktäschel

Committee Selection using Attribute Approvals
Venkateswara Rao Kagita, Arun K Pujari, Vineet Padmanabhan, Haris Aziz and Vikas Kumar

Mechanism Design for Housing Markets over Social Networks
Takehiro Kawasaki, Ryoji Wada, Taiki Todo and Makoto Yokoo

Knowing Why — On the Dynamics of Knowledge about Actual Causes in the Situation Calculus
Shakil Khan and Yves Lespérance

Beyond To Act or Not to Act: Fast Lagrangian Approaches to General Multi-Action Restless Bandits
Jackson Killian, Andrew Perrault and Milind Tambe

Feasible Coalition Sequences
Tabajara Krausburg, Jürgen Dix and Rafael H. Bordini

Adaptive Operating Hours for Improved Performance of Taxi Fleets
Rajiv Ranjan Kumar, Pradeep Varakantham and Shih-Fen Cheng

Approval-Based Shortlisting
Martin Lackner and Jan Maly

Aggregating Bipolar Opinions
Stefan Lauren, Francesco Belardinelli and Francesca Toni

The Price is (Probably) Right: Learning Market Equilibria from Samples
Omer Lev, Neel Patel, Vignesh Viswanathan and Yair Zick

Deep Implicit Coordination Graphs for Multi-agent Reinforcement Learning
Sheng Li, Jayesh K. Gupta, Peter Morales, Ross Allen and Mykel J. Kochenderfer

Parallel Curriculum Experience Replay in Distributed Reinforcement Learning
Yuyu Li and Jianmin Ji

Structured Diversification Emergence via Reinforced Organization Control and Hierarchical Consensus Learning
Wenhao Li, Xiangfeng Wang, Bo Jin, Junjie Sheng, Yun Hua and Hongyuan Zha

Let the DOCTOR Decide Whom to Test: Adaptive Testing Strategies to Tackle the COVID-19 Pandemic
Yu Liang and Amulya Yadav

Axies: Identifying and Evaluating Context-Specific Values
Enrico Liscio, Michiel van der Meer, Luciano Cavalcante Siebert, Catholijn M. Jonker, Niek Mouter and Pradeep K. Murukannaiah

Energy Based Imitation Learning
Minghuan Liu, Tairan He, Minkai Xu and Weinan Zhang

Deceptive Reinforcement Learning for Privacy-Preserving Planning
Zhengshang Liu, Yue Yang, Tim Miller and Peta Masters

A Logic of Evaluation
Emiliano Lorini

Exploration of Indoor Environments Predicting the Layout of Partially Observed Rooms
Matteo Luperto, Luca Fochetta and Francesco Amigoni

Contrasting Centralized and Decentralized Critics in Multi-Agent Reinforcement Learning
Xueguang Lyu, Yuchen Xiao, Brett Daley and Christopher Amato

Modeling the Interaction between Agents in Cooperative Multi-Agent Reinforcement Learning
Xiaoteng Ma, Yiqin Yang, Chenghao Li, Qianchuan Zhao, Jun Yang and Yiwen Lu

To hold or not to hold? – Reducing Passenger Missed Connections in Airlines using Reinforcement Learning
Tejasvi Malladi, Karpagam Murugappan, Depak Sudarsanam, Ramasubramanian Suriyanarayanan and Arunchandar Vasan

Extended Goal Recognition: a Planning-Based Model for Strategic Deception
Peta Masters, Michael Kirley and Wally Smith

Risk-Aware Interventions in Public Health: Planning with Restless Multi-Armed Bandits
Aditya Mate, Andrew Perrault and Milind Tambe

Identification of unexpected decisions in Partially Observable Monte Carlo Planning: a rule-based approach
Giulio Mazzi, Alberto Castellini and Alessandro Farinelli

Cooperation between Independent Reinforcement Learners under Wealth Inequality and Collective Risks
Ramona Merhej, Fernando P. Santos, Francisco S. Melo and Francisco C. Santos

Value-Guided Synthesis of Parametric Normative Systems
Nieves Montes and Carles Sierra

ELVIRA: an Explainable Agent for Value and Utility-driven Multiuser Privacy
Francesca Mosca and Jose M. Such

A Novelty-Centric Agent Architecture for Changing Worlds
Faizan Muhammad, Vasanth Sarathy, Jivko Sinapov, Matthias Scheutz, Gyan Tatiya, Saurav Gyawali, Shivam Goel and Mateo Guaman

Reward Machines for Cooperative Multi-Agent Reinforcement Learning
Cyrus Neary, Zhe Xu, Bo Wu and Ufuk Topcu

Adversarial learning in revenue-maximizing auctions
Thomas Nedelec, Jules Baudet, Vianney Perchet and Noureddine El Karoui

Multi-Agent Graph Attention Communication and Teaming
Yaru Niu, Rohan Paleja and Matthew Gombolay

Emergent Communication under Competition
Michael Noukhovitch, Travis LaCroix, Angeliki Lazaridou and Aaron Courville

Safe Pareto improvements for delegated game playing
Caspar Oesterheld and Vincent Conitzer

Active Screening for Recurrent Diseases: A Reinforcement Learning Approach
Han Ching Ou, Haipeng Chen, Shahin Jabbari and Milind Tambe

Group Fairness for Knapsack Problems
Deval Patel, Arindam Khan and Anand Louis

An Agent-Based Model to Predict Pedestrians' Trajectories with an Autonomous Vehicle in Shared Spaces
Manon Prédhumeau, Lyuba Mancheva, Julie Dugdale and Anne Spalanzani

Latency-Aware Local Search for Distributed Constraint Optimization
Ben Rachmut, Roie Zivan and William Yeoh

Accelerating Recursive Partition-Based Causal Structure Learning
Md. Musfiqur Rahman, Ayman Rasheed, Md. Mosaddek Khan, Mohammad Ali Javidian, Pooyan Jamshidi and Md. Mamun-Or-Rashid

Peer-to-peer Autonomous Agent Communication Network
Lokman Rahmani, David Minarsch and Jonathan Ward

Nash Equilibria in Finite-Horizon Multiagent Concurrent Games
Senthil Rajasekaran and Moshe Vardi

MAPFAST: A Deep Algorithm Selector for Multi Agent Path Finding using Shortest Path Embeddings
Jingyao Ren, Vikraman Sathiyanarayanan, Eric Ewing, Baskin Senbaslar and Nora Ayanian

User and System Stories: an agile approach for managing requirements in AOSE
Sebastian Rodriguez, John Thangarajah and Michael Winikoff

Accumulating Risk Capital Through Investing in Cooperation
Charlotte Roman, Michael Dennis, Andrew Critch and Stuart Russell

TDprop: Does Adaptive Optimization With Jacobi Preconditioning Help Temporal Difference Learning?
Joshua Romoff, Peter Henderson, David Kanaa, Emmanuel Bengio, Ahmed Touati, Pierre-Luc Bacon and Joelle Pineau

Cooperative and Competitive Biases for Multi-Agent Reinforcement Learning
Heechang Ryu, Hayong Shin and Jinkyoo Park

SEERL: Sample Efficient Ensemble Reinforcement Learning
Rohan Saphal, Balaraman Ravindran, Dheevatsa Mudigere, Sasikanth Avancha and Bharat Kaul

Efficiently Guiding Imitation Learning Agents with Human Gaze
Akanksha Saran, Ruohan Zhang, Elaine Schaertl Short and Scott Niekum

SPOTTER: Extending Symbolic Planning Operators through Targeted Reinforcement Learning
Vasanth Sarathy, Daniel Kasenberg, Shivam Goel, Jivko Sinapov and Matthias Scheutz

A Local Search Based Approach to Solve Continuous DCOPs
Amit Sarker, Moumita Choudhury and Md. Mosaddek Khan

CMCF: An architecture for realtime gesture generation by Clustering gestures by Motion and Communicative Function
Carolyn Saund, Andrei Bîrlădeanu and Stacy Marsella

Timely Information from Prediction Markets
Grant Schoenebeck, Chenkai Yu and Fang-Yi Yu

Partial Robustness in Team Formation: Bridging the Gap between Robustness and Resilience
Nicolas Schwind, Emir Demirović, Katsumi Inoue and Jean Marie Lagniez

An Autonomous Negotiating Agent Framework with Reinforcement Learning based Strategies and Adaptive Strategy Switching Mechanism
Ayan Sengupta, Yasser Mohammad and Shinji Nakadai

Sequential Ski Rental Problem
Anant Shah and Arun Rajkumar

Multiagent Epidemiologic Inference through Realtime Contact Tracing
Guni Sharon, James Ault, Peter Stone, Varun Kompella and Roberto Capobianco

Cooperative Policy Learning with Pre-trained Heterogeneous Observation Representation
Wenlei Shi, Xinran Wei, Jia Zhang, Xiaoyuan Ni, Arthur Jiang, Jiang Bian and Tie-Yan Liu

Cyber Attack Intent Recognition and Active Deception using Factored Interactive POMDPs
Aditya Shinde, Prashant Doshi and Omid Setayeshfar

Sequential Mechanisms for Multi-type Resource Allocation
Sujoy Sikdar, Xiaoxi Guo, Haibin Wang, Lirong Xia and Yongzhi Cao

Active Perception Within BDI Agents Reasoning Cycle
Gustavo Silva, Jomi Hübner and Leandro Becker

AlwaysSafe: Reinforcement Learning without Safety Constraint Violations during Training
Thiago D. Simão, Nils Jansen and Matthijs T. J. Spaan

Rankings for Bipartite Tournaments via Chain Editing
Joseph Singleton and Richard Booth

Towards Transferrable Personalized Student Models in Educational Games
Samuel Spaulding, Jocelyn Shen, Haewon Park and Cynthia Breazeal

Regular Model Checking Approach to Knowledge Reasoning over Parameterized Systems
Daniel Stan and Anthony Widjaja Lin

Achieving Sybil-proofness in Distributed Work Systems
Alexander Stannat, Can Umut Ileri, Dion Gijswijt and Johan Pouwelse

Mean-Payoff Games with Omega-Regular Specifications
Thomas Steeples, Julian Gutierrez and Michael Wooldridge

Connections between Fairness Criteria and Efficiency for Allocating Indivisible Chores
Ankang Sun, Bo Chen and Xuan Vinh Doan

Grab the Reins of Crowds: Estimating the Effects of Crowd Movement Guidance Using Causal Inference
Koh Takeuchi, Ryo Nishida, Hisashi Kashima and Masaki Onishi

Guiding Evolutionary Strategies with Off-Policy Actor-Critic
Yunhao Tang

Learning Complex Policy Distribution with CEM Guided Adversarial Hypernetwork
Shi Yuan Tang, Athirai A. Irissappane, Frans A. Oliehoek and Jie Zhang

Adaptive Cascade Submodular Maximization
Shaojie Tang and Jing Yuan

Efficient Exact Computation of Setwise Minimax Regret for Interactive Preference Elicitation
Federico Toffano, Paolo Viappiani and Nic Wilson

Collaborative Multiagent Decision Making for Lane-Free Autonomous Driving
Dimitrios Troullinos, Georgios Chalkiadakis, Ioannis Papamichail and Markos Papageorgiou

No More Hand-Tuning Rewards: Masked Constrained Policy Optimization for Safe Reinforcement Learning
Stef Van Havermaet, Yara Khaluf and Pieter Simoens

Reinforcement Learning for Unified Allocation and Patrolling in Signaling Games with Uncertainty
Aravind Venugopal, Elizabeth Bondi, Harshavardhan Kamarthi, Keval Dholakia, Balaraman Ravindran and Milind Tambe

Scalable Optimization for Wind Farm Control using Coordination Graphs
Timothy Verstraeten, Pieter-Jan Daems, Eugenio Bargiacchi, Diederik M. Roijers, Pieter Libin and Jan Helsen

Mechanism Design for Public Projects via Neural Networks
Guanhua Wang, Runqi Guo, Yuko Sakurai, Muhammad Ali Babar and Mingyu Guo

Fairness and Efficiency in Facility Location Problems with Continuous Demands
Chenhao Wang and Mengqi Zhang

Strategic Evasion of Centrality Measures
Marcin Waniek, Jan Woźnica, Kai Zhou, Yevgeniy Vorobeychik, Talal Rahwan and Tomasz Michalak

Transferable Environment Poisoning: Training-time Attack on Reinforcement Learning
Hang Xu, Rundong Wang, Lev Raizman and Zinovi Rabinovich

Drone Formation Control via Belief-Correlated Imitation Learning
Bo Yang, Chaofan Ma and Xiaofang Xia

Intention Progression using Quantitative Summary Information
Yuan Yao, Natasha Alechina, Brian Logan and John Thangarajah

Scalable Multiagent Driving Policies For Reducing Traffic Congestion
Jiaxun Cui, William Macke, Harel Yedidsion, Aastha Goyal, Daniel Urieli and Peter Stone

A Computational Model of Coping for simulating human behavior in high-stress situations
Nutchanon Yongsatianchot and Stacy Marsella

Evolution of Strategies in Sequential Security Games
Adam Żychowski and Jacek Mańdziuk

Main Track: Extended Abstracts

How to Amend a Constitution? Model, Axioms, and Supermajority Rules
Ben Abramowitz, Ehud Shapiro and Nimrod Talmon

Learning Competitive Equilibria in Noisy Combinatorial Markets
Enrique Areyan Viqueira, Cyrus Cousins and Amy Greenwald

Interpretive Blindness and the Impossibility of Learning from Testimony
Nicholas Asher and Julie Hunter

Quantifying Human Perception with Multi-Armed Bandits
Julien Audiffren

Modelling Cooperation in Network Games with Spatio-Temporal Complexity
Michiel Bakker, Richard Everett, Laura Weidinger, Iason Gabriel, William Isaac, Joel Leibo and Edward Hughes

Image Sequence Understanding through Narrative Sensemaking
Zev Battad and Mei Si

Maximizing Influence-Based Group Shapley Centrality
Ruben Becker, Gianlorenzo D’Angelo and Hugo Gilbert

How to Guide a Non-Cooperative Learner to Cooperate: Exploiting No-Regret Algorithms in System Design
Nicholas Bishop, Le Cong Dinh and Long Tran-Thanh

Learning Index Policies for Restless Bandits with Application to Maternal Healthcare
Arpita Biswas, Gaurav Aggarwal, Pradeep Varakantham and Milind Tambe

CHARET: Character-centered Approach to Emotion Tracking in Stories
Diogo Carvalho, Joana Campos, Manuel Guimarães, Ana Antunes, João Dias and Pedro A. Santos

On the Sensory Commutativity of Action Sequences for Embodied Agents
Hugo Caselles-Dupré, Michael Garcia-Ortiz and David Filliat

Difference Rewards Policy Gradients
Jacopo Castellini, Sam Devlin, Frans A. Oliehoek and Rahul Savani

Learning to Cooperate with Unseen Agents Through Meta-Reinforcement Learning
Rujikorn Charakorn, Poramate Manoonpong and Nat Dilokthanakul

Promoting Fair Proposers, Fair Responders or Both? Cost-Efficient Interference in the Spatial Ultimatum Game
Theodor Cimpeanu, Cedric Perret and The Anh Han

A Logic of Inferable in Multi-Agent Systems with Budget and Costs
Stefania Costantini, Andrea Formisano and Valentina Pitoni

Stratified Experience Replay: Correcting Multiplicity Bias in Off-Policy Reinforcement Learning
Brett Daley, Cameron Hickert and Christopher Amato

A Generic Multi-Agent Model for Resource Allocation Strategies in Online On-Demand Transport with Autonomous Vehicles
Alaa Daoud, Flavien Balbo, Paolo Gianessi and Gauthier Picard

A Multi-Arm Bandit Approach To Subset Selection Under Constraints
Ayush Deva, Kumar Abhishek and Sujit Gujar

It’s A Match! Gesture Generation Using Expressive Parameter Matching
Ylva Ferstl, Michael Neff and Rachel McDonnell

Partially Cooperative Multi-Agent Periodic Indivisible Resource Allocation
Yuval Gabai Schlosberg and Roie Zivan

Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity
Marta Garnelo, Wojciech Marian Czarnecki, Siqi Liu, Dhruva Tirumala, Junhyuk Oh, Gauthier Gidel, Hado van Hasselt and David Balduzzi

Allocating teams to tasks: an anytime heuristic competence-based approach
Athina Georgara, Juan Antonio Rodriguez Aguilar and Carles Sierra

Shielding Atari Games with Bounded Prescience
Mirco Giacobbe, Mohammadhosein Hasanbeig, Daniel Kroening and Hjalmar Wijk

Comparison of Desynchronization Methods for a Decentralized Swarm on a Logistical Resupply Problem
Joseph Giordano, Annie Wu, Arjun Pherwani and H. David Mathias

Towards Decentralized Social Reinforcement Learning via Ego-Network Extrapolation
Mahak Goindani and Jennifer Neville

A Global Multi-Sided Market with Ascending-Price Mechanism
Rica Gonen and Erel Segal-Halevi

Rank Aggregation by Dissatisfaction Minimisation in the Unavailable Candidate Model
Arnaud Grivet Sébert, Nicolas Maudet, Patrice Perny and Paolo Viappiani

Sequential and Swap Mechanisms for Public Housing Allocation with Quotas and Neighbourhood-Based Utilities
Nathanaël Gross-Humbert, Nawal Benabbou, Aurélie Beynier and Nicolas Maudet

Teaching Unknown Learners to Classify via Feature Importance
Carla Guerra, Francisco S. Melo and Manuel Lopes

Simultaneous Learning of Moving and Active Perceptual Policies for Autonomous Robot
Wataru Hatanaka, Fumihiro Sasaki, Ryota Yamashina and Atsuo Kawaguchi

Distributional Monte Carlo Tree Search for Risk-Aware and Multi-Objective Reinforcement Learning
Conor F Hayes, Mathieu Reymond, Diederik M. Roijers, Enda Howley and Patrick Mannion

Approximating Spatial Evolutionary Games using Bayesian Networks
Vincent Hsiao, Xinyue Pan, Dana Nau and Rina Dechter

Balancing Rational and Other-Regarding Preferences in Cooperative-Competitive Environments
Dmitry Ivanov, Vladimir Egorov and Aleksei Shpilman

We might walk together but I run faster: Network Fairness and Scalability in Blockchains
Anurag Jain, Shoeb Siddiqui and Sujit Gujar

Preserving Consistency for Liquid Knapsack Voting
Pallavi Jain, Krzysztof Sornat and Nimrod Talmon

Strategic Abilities of Asynchronous Agents: Semantic Side Effects
Wojciech Jamroga, Wojciech Penczek and Teofil Sidoruk

Solving 3D Bin Packing Problem via Multimodal Deep Reinforcement Learning
Yuan Jiang, Zhiguang Cao and Jie Zhang

Toward Consistent Agreement Approximation in Abstract Argumentation and Beyond
Timotheus Kampik and Juan Carlos Nieves

Coverage Control under Connectivity Constraints
Shota Kawajiri, Kazuki Hirashima and Masashi Shiraishi

Solver Agent: Towards Emotional and Opponent-Aware Agent for Human-Robot Negotiation
Mehmet Onur Keskin, Umut Çakan and Reyhan Aydoğan

Evaluating the Robustness of Collaborative Agents
Paul Knott, Micah Carroll, Sam Devlin, Kamil Ciosek, Katja Hofmann, Anca Dragan and Rohin Shah

On weakly and strongly popular rankings
Sonja Kraiczy, Ágnes Cseh and David Manlove

Fairness in Long-Term Participatory Budgeting
Martin Lackner, Jan Maly and Simon Rey

RPPLNS: Pay-per-last-N-shares with a Randomised Twist
Philip Lazos, Francisco Javier Marmolejo Cossío, Xinyu Zhou and Jonathan Katz

Partial Disclosure of Private Dependencies in Privacy Preserving Planning
Rotem Lev Lehman, Guy Shani and Roni Stern

Learning Cooperative Solution Concepts From Voting Behavior: A Case Study on the Israeli Knesset
Omer Lev, Wei Lu, Alan Tsang and Yair Zick

Anytime Multi-Agent Path Finding via Large Neighborhood Search
Jiaoyang Li, Zhe Chen, Daniel Harabor, Peter J. Stuckey and Sven Koenig

Object Allocation Over a Network of Objects: Mobile Agents with Strict Preferences
Fu Li, C. Gregory Plaxton and Vaibhav B. Sinha

Reliability-Aware Multi-UAV Coverage Path Planning using a Genetic Algorithm
Mickey Li, Arthur Richards and Mahesh Sooriyabandara

Solid Semantics and Extension Aggregation Using Quota Rules under Integrity Constraints
Xiaolong Liu and Weiwei Chen

Call Markets with Adaptive Clearing Intervals
Buhong Liu, Maria Polukarov, Carmine Ventre, Lingbo Li and Leslie Kanthan

Trajectory Diversity for Zero-Shot Coordination
Andrei Lupu, Hengyuan Hu and Jakob Foerster

Branch-and-Bound Heuristics for Incomplete DCOPs
Atena M. Tabakhi, Yuanming Xiao, William Yeoh and Roie Zivan

Optimized Execution of PDDL Plans using Behavior Trees
Francisco Martín Rico, Matteo Morelli, Huascar Espinoza, Francisco J. Rodríguez Lera and Vicente Matellán Olivera

A Strategic Analysis of Portfolio Compression
Katherine Mayo and Michael P. Wellman

A General Framework for the Logical Representation of Combinatorial Exchange Protocols
Munyque Mittelmann, Sylvain Bouveret and Laurent Perrussel

A Privacy-Preserving and Accountable Multi-agent Learning Framework
Anudit Nagar, Cuong Tran and Ferdinando Fioretto

SIBRE: Self Improvement Based REwards for Adaptive Feedback in Reinforcement Learning
Somjit Nath, Richa Verma, Abhik Ray and Harshad Khadilkar

Tunable Behaviours in Sequential Social Dilemmas using Multi-Objective Reinforcement Learning
David O’Callaghan and Patrick Mannion

Online Learning of Shaping Reward with Subgoal Knowledge
Takato Okudo and Seiji Yamada

Attention Actor-Critic algorithm for Multi-Agent Constrained Co-operative Reinforcement Learning
P. Parnika, Raghuram Bharadwaj Diddigi, Sai Koti Reddy Danda and Shalabh Bhatnagar

Toward a Self-Learning Governance Loop for Competitive Multi-Attribute MAS
Michael Pernpeintner

Personalising the Dialogue of Relational Agents for First-Time Users
Hedieh Ranjbartabar, Deborah Richards, Ayse Aysin Bilgin and Cat Kutay

Finite-time Consensus in the Presence of Malicious Agents
Sachit Rao and Shrisha Rao

Multiagent Task Allocation and Planning with Multi-Objective Requirements
Thomas Robinson, Guoxin Su and Minjie Zhang

An Autonomous Drive Balancing Strategy for the Design of Purpose in Open-ended Learning Robots
Alejandro Romero, Francisco Bellas and Richard J. Duro

Combining LSTMs and Symbolic Approaches for Robust Plan Recognition
Leonardo Rosa Amado, Ramon Fraga Pereira and Felipe Meneguzzi

Dynamic Skill Selection for Learning Joint Actions
Enna Sachdeva, Shauharda Khadka, Somdeb Majumdar and Kagan Tumer

Mitigating Negative Side Effects via Environment Shaping
Sandhya Saisubramanian and Shlomo Zilberstein

Analyzing the Benefits of Object Transfer in the Distributed Collection Problem
Christopher Sanford and Jae Oh

Social Network Interventions to Prevent Reciprocity-driven Polarization
Fernando P. Santos, Francisco C. Santos, Jorge M. Pacheco and Simon Levin

HOAD: The Hanabi Open Agent Dataset
Aron Sarmasi, Timothy Zhang, Chu-Hung Cheng, Huyen Pham, Xuanchen Zhou, Duong Nguyen, Soumil Shekdar and Joshua McCoy

Egalitarian and Just Digital Currency Networks
Gal Shahaf, Ehud Shapiro and Nimrod Talmon

MAS-Bench: Parameter Optimization Benchmark for Multi-agent Crowd Simulation
Shusuke Shigenaka, Shunki Takami, Shuhei Watanabe, Yuki Tanigaki, Yoshihiko Ozaki and Masaki Onishi

Approximate Difference Rewards for Scalable Multiagent Reinforcement Learning
Arambam James Singh, Akshat Kumar and Hoong Chuin Lau

Self-Attention Meta-Learner for Continual Learning
Ghada Sokar, Decebal Constantin Mocanu and Mykola Pechenizkiy

A Succinct Representation Scheme for Cooperative Games under Uncertainty
Errikos Streviniotis, Athina Georgara and Georgios Chalkiadakis

Gambler Bandits and the Regret of Being Ruined
Filipo Studzinski Perotto, Sattar Vakili, Pratik Gajane, Yaser Faghan and Mathieu Bourgais

A Distributional Perspective on Value Function Factorization Methods for Multi-Agent Reinforcement Learning
Wei-Fang Sun, Cheng-Kuang Lee and Chun-Yi Lee

Intrinsic Motivated Multi-Agent Communication
Chuxiong Sun, Bo Wu, Rui Wang, Xiaohui Hu, Xiaoya Yang and Cong Cong

Sound Algorithms in Imperfect Information Games
Michal Sustr, Martin Schmid, Matej Moravčík, Neil Burch, Marc Lanctot and Michael Bowling

Cohorting to isolate asymptomatic spreaders: An agent-based simulation study on the Mumbai Suburban Railway
Alok Talekar, Sharad Shriram, Nidhin Vaidhiyan, Gaurav Aggarwal, Jiangzhuo Chen, Srini Venkatramanan, Lijing Wang, Aniruddha Adiga, Adam Sadilek, Ashish Tendulkar, Madhav Marathe, Rajesh Sundaresan and Milind Tambe

Eliciting fairness in multiplayer bargaining through network-based role assignment
Andreia Sofia Teixeira, Francisco C. Santos, Alexandre P Francisco and Fernando P. Santos

Learning Robust Helpful Behaviors in Two-Player Cooperative Atari Environments
Paul Tylkin, Goran Radanovic and David Parkes

Towards Sample Efficient Learners in Population based Referential Games through Action Advising
Shresth Verma

The Tight Bound for Pure Price of Anarchy in An Extended Miners’ Dilemma Game
Qian Wang and Yurong Chen

Distributed Q-Learning with State Tracking for Multi-agent Networked Control
Hang Wang, Sen Lin, Hamid Jafarkhani and Junshan Zhang

The Sabre Narrative Planner: Multi-Agent Coordination with Intentions and Beliefs
Stephen Ware and Cory Siler

Learning Policies for Effective Incentive Allocation in Unknown Social Networks
Shiqing Wu, Quan Bai and Weihua Li

Optimal Crowdfunding Design
Xiang Yan and Yiling Chen

A Blockchain-Enabled Quantitative Approach to Trust and Reputation Management with Sparse Evidence
Leonit Zeynalvand, Tie Luo, Ewa Andrejczuk, Dusit Niyato, Sin G. Teo and Jie Zhang

Fast Adaptation to External Agents via Meta-Imitation Counterfactual Regret Advantage
Mingyue Zhang, Zhi Jin, Yang Xu, Zehan Shen, Kun Liu and Keyu Pan

Deep Interactive Bayesian Reinforcement Learning via Meta-Learning
Luisa Zintgraf, Sam Devlin, Kamil Ciosek, Shimon Whiteson and Katja Hofmann