CoG 2022: Beijing, China
- IEEE Conference on Games, CoG 2022, Beijing, China, August 21-24, 2022. IEEE 2022, ISBN 978-1-6654-5989-1
- Hao Zeng, Wei Zhang, Keyu Chen, Zhimeng Zhang, Lincheng Li, Yu Ding: Paste You Into Game: Towards Expression and Identity Consistency Face Swapping. 1-8
- Shiyu Huang, Chao Yu, Bin Wang, Dong Li, Yu Wang, Ting Chen, Jun Zhu: VMAPD: Generate Diverse Solutions for Multi-Agent Games with Recurrent Trajectory Discriminators. 9-16
- Breno M. F. Viana, Leonardo T. Pereira, Claudio Fabiano Motta Toledo: Illuminating the Space of Enemies Through MAP-Elites. 17-24
- Martina Mittermueller, Zhanxiang Ye, Helmut Hlavacs: EST-GAN: Enhancing Style Transfer GANs with Intermediate Game Render Passes. 25-32
- Jiangke Lin, Lincheng Li, Yi Yuan, Zhengxia Zou: Realistic Game Avatars Auto-Creation from Single Images via Three-pathway Network. 33-40
- Bas A. Plijnaer, Günter Wallner, Regina Bernhaupt: Ethermal - Lightweight Thermal Feedback for VR Games. 41-48
- Yuqian Fu, Jiajun Chai, Yuanheng Zhu, Dongbin Zhao: LILAC: Learning a Leader for Cooperative Reinforcement Learning. 49-55
- Junwoo Park, Youngwoo Cho, Gyuhyeon Sim, Hojoon Lee, Jaegul Choo: Enemy Spotted: In-game Gun Sound Dataset for Gunshot Classification and Localization. 56-63
- Kacper Kenji Lesniak, Maria Maistro: Crowdsourcing Controller - Utilizing Reliable Agents in a Multiplayer Game. 64-71
- Tristan Tomilin, Tianhong Dai, Meng Fang, Mykola Pechenizkiy: LevDoom: A Benchmark for Generalization on Level Difficulty in Reinforcement Learning. 72-79
- Hao Chen, Rongkai Shi, Diego Monteiro, Nilufar Baghaei, Hai-Ning Liang: VRCockpit: Mitigating Simulator Sickness in VR Games Using Multiple Egocentric 2D View Frames. 80-87
- Chintan Trivedi, Konstantinos Makantasis, Antonios Liapis, Georgios N. Yannakakis: Learning Task-Independent Game State Representations from Unlabeled Images. 88-95
- Cheng Hao Ke, Haozhang Deng, Congda Xu, Jiong Li, Xingyun Gu, Borchuluun Yadamsuren, Diego Klabjan, Rafet Sifa, Anders Drachen, Simon Demediuk: DOTA 2 match prediction through deep learning team fight models. 96-103
- Tim Pearce, Jun Zhu: Counter-Strike Deathmatch with Large-Scale Behavioural Cloning. 104-111
- Gianluca Guglielmo, Paris Mavromoustakos Blom, Michal Klincewicz, Boris Cule, Pieter Spronck: Face in the Game: Using Facial Action Units to Track Expertise in Competitive Video Game Play. 112-118
- Ziqi Wang, Jialin Liu: Online Game Level Generation from Music. 119-126
- Youpeng Zhao, Jian Zhao, Xunhan Hu, Wengang Zhou, Houqiang Li: DouZero+: Improving DouDizhu AI by Opponent Modeling and Coach-guided Learning. 127-134
- Sebastian S. Christiansen, Marco Scirea: Space segmentation and multiple autonomous agents: a Minecraft settlement generator. 135-142
- Oliver Withington, Laurissa Tokarchuk: Compressing and Comparing the Generative Spaces of Procedural Content Generators. 143-150
- Kuei-Tso Lee, Sheng-Jyh Wang: Bayesian Opponent Exploitation by Inferring the Opponent's Policy Selection Pattern. 151-158
- Matthias Müller-Brockhausen, Aske Plaat, Mike Preuss: Towards verifiable Benchmarks for Reinforcement Learning. 159-166
- Keyuan Zhang, Jiayu Bai, Jialin Liu: Generating Game Levels of Diverse Behaviour Engagement. 167-174
- Francesco Garavaglia, Renato Avellar Nobre, Laura Anna Ripamonti, Dario Maggiorini, Davide Gadia: Moody5: Personality-biased agents to enhance interactive storytelling in video games. 175-182
- Ben Boudaoud, Josef B. Spjut, Joohwan Kim: Mouse Sensitivity in First-person Targeting Tasks. 183-190
- Matilda Tamm, Olivia Shamon, Hector Anadon Leon, Konrad Tollmar, Linus Gisslén: Automatic Testing and Validation of Level of Detail Reductions Through Supervised Learning. 191-198
- Colan F. Biemer, Seth Cooper: On Linking Level Segments. 199-205
- Elliot Doe, Mark H. M. Winands, Dennis J. N. J. Soemers, Cameron Browne: Combining Monte-Carlo Tree Search with Proof-Number Search. 206-212
- Sumaira Erum Zaib, Masayuki Yamamura: Using Heart Rate and Machine Learning for VR Horror Game Personalization. 213-220
- Miklos Kepes, Nicholas Guttenberg, Lisa B. Soros: ChemGrid: An Open-Ended Benchmark Domain for an Open-Ended Learner. 221-228
- Jitao Wang, Dongyun Xue, Jian Zhao, Wengang Zhou, Houqiang Li: Mastering the Game of 3v3 Snakes with Rule-Enhanced Multi-Agent Reinforcement Learning. 229-236
- Guoqing Liu, Mengzhang Cai, Li Zhao, Tao Qin, Adrian Brown, Jimmy Bischoff, Tie-Yan Liu: Inspector: Pixel-Based Automated Game Testing via Exploration, Detection, and Investigation. 237-244
- Hidde Bolijn, Martin Li, Andries Reurink, Cas van Rijn, Rafael Bidarra: Benni's Forest - a serious game on the challenges of reforestation. 245-252
- Marco Pleines, Konstantin Ramthun, Yannik Wegener, Hendrik Meyer, Matthias Pallasch, Sebastian Prior, Jannik Drögemüller, Leon Büttinghaus, Thilo Röthemeyer, Alexander Kaschwig, Oliver Chmurzynski, Frederik Rohkrähmer, Roman Kalkreuth, Frank Zimmer, Mike Preuss: On the Verge of Solving Rocket League using Deep Reinforcement Learning and Sim-to-sim Transfer. 253-260
- Yubin Wang, Yifeng Sun, Jiang Wu, Hao Hu, Zhiqiang Wu, Weigui Huang: Reinforcement Learning using Reward Expectations in Scenarios with Aleatoric Uncertainties. 261-267
- Ruben Band, Maarten Lips, Julivius Prawira, Jurgen van Schagen, Simon Tulling, Ying Zhang, Aicha A. Benaiss, Ineke J. M. van der Ham, Mijael Bueno, Rafael Bidarra: Training and assessing perspective taking through A Hole New Perspective. 268-275
- In-Chang Baek, Taegwan Ha, TaeHwa Park, Kyung-Joong Kim: Toward Cooperative Level Generation in Multiplayer Games: A User Study in Overcooked! 276-283
- Yifan Gao: PGD: A Large-scale Professional Go Dataset for Data-driven Analytics. 284-291
- James Goodman, Diego Perez Liebana, Simon Lucas: MultiTree MCTS in Tabletop Games. 292-299
- Marko Tot, Michelangelo Conserva, Diego Perez Liebana, Sam Devlin: Turning Zeroes into Non-Zeroes: Sample Efficient Exploration with Monte Carlo Graph Search. 300-306
- Jonas Schumacher, Marco Pleines: Improving Bidding and Playing Strategies in the Trick-Taking game Wizard using Deep Q-Networks. 307-314
- Fanyu Zeng, Guangyu Xing, Guang Han: Memory-Augmented Episodic Value Network. 315-321
- Peng Tian, Wenfei Lan, Xiao Zhang: Hero featured learning algorithm for winning rate prediction of Honor of Kings. 322-329
- Daniel Lutalo: Agent X: Improving Exploration vs Exploitation in the State of the Art Angry Birds AI. 330-337
- Lijuan Duan, Shuxin Li, Wenbo Zhang, Wenjian Wang: MOBA Game Item Recommendation via Relation-aware Graph Attention Network. 338-344
- Jinqiu Li, Shuang Wu, Haobo Fu, Qiang Fu, Enmin Zhao, Junliang Xing: Speedup Training Artificial Intelligence for Mahjong via Reward Variance Reduction. 345-352
- Martin Balla, Diego Perez Liebana: Task Relabelling for Multi-task Transfer using Successor Features. 353-360
- Per-Arne Andersen, Morten Goodwin, Ole-Christoffer Granmo: CaiRL: A High-Performance Reinforcement Learning Environment Toolkit. 361-368
- Linjie Xu, Jorge Hurtado Grueso, Dominic Jeurissen, Diego Perez Liebana, Alexander Dockhorn: Elastic Monte Carlo Tree Search with State Abstraction for Strategy Game Playing. 369-376
- Steve Bakos, Heidar Davoudi: Mitigating Cowardice for Reinforcement Learning Agents in Combat Scenarios. 377-384
- Miguel González Duque, Rasmus Berg Palm, Søren Hauberg, Sebastian Risi: Mario Plays on a Manifold: Generating Functional Content in Latent Space through Differential Geometry. 385-392
- Branden Ingram, Benjamin Rosman, Clint J. van Alten, Richard Klein: Play-style Identification through Deep Unsupervised Clustering of Trajectories. 393-400
- Xiaoyun Feng: Multi-goal Reinforcement Learning via Exploring Successor Matching. 401-408
- Mattia Colombo, Alan Dolhasz, Jason Hockman, Carlo Harvey: Acoustic Rendering Based on Geometry Reduction and Acoustic Material Classification. 409-416
- Denise Angilica, Giovambattista Ianni, Francesco Pacenza: Declarative AI design in Unity using Answer Set Programming. 417-424
- Shi Johnson-Bey, Mark J. Nelson, Michael Mateas: Neighborly: A Sandbox for Simulation-based Emergent Narrative. 425-432
- Stephen G. Ware, Rachelyn Farrell: Salience as a Narrative Planning Step Cost Function. 433-440
- Munir Makhmutov, Joseph Alexander Brown, Maksim Surkov, Anton Timchenko, Kamilya Timchenko: Adaptive Game Soundtrack Tempo Based on Players' Actions. 441-448
- Alexander J. Bisberg, Emilio Ferrara: GCN-WP - Semi-Supervised Graph Convolutional Networks for Win Prediction in Esports. 449-456
- Jiale Xu, Jian Hu, Shixian Wang, Xuyang Yang, Wancheng Ni: MiaoSuan Wargame: A Multi-Mode Integrated Platform for Imperfect Information Game. 457-464
- Chao-Lin Liu: Using Wordle for Learning to Design and Compare Strategies. 465-472
- Xiangyong Chen, Donglin Wang, Feng Zhao, Ming Guo, Jianlong Qiu: A Viewpoint on Construction of Networked Model of Event-triggered Hybrid Dynamic Games. 473-477
- Ibrahim Khan, Thai Van Nguyen, Xincheng Dai, Ruck Thawonmas: DareFightingICE Competition: A Fighting Game Sound Design and AI Competition. 478-485
- Chen Chen, Tomoyuki Kaneko: Learning Strategies for Imperfect Information Board Games Using Depth-Limited Counterfactual Regret Minimization and Belief State. 486-493
- Alexandre Berthault, Camille Richard, Mathieu Anthoine, Florian Wolf, Maël Addoum: Subtle Attention Guidance for a New Virtual Reality Game. 494-495
- Tairan Huang, Xu Li, Hao Li, Mingming Sun, Ping Li: CGAR: Critic Guided Action Redistribution in Reinforcement Leaning. 496-499
- Roberto Gallotta, Kai Arulkumaran, Lisa B. Soros: Surrogate Infeasible Fitness Acquirement FI-2Pop for Procedural Content Generation. 500-503
- Sotetsu Koyamada, Keigo Habara, Nao Goto, Shinri Okano, Soichiro Nishimori, Shin Ishii: Mjx: A framework for Mahjong AI research. 504-507
- Miguel Ángel García-Ruíz, Pedro C. Santana-Mancilla, Laura S. Gaytán-Lugo, Raúl Teodoro Aquino Santos: Smelling on the Edge: Using Fuzzy Logic in Edge Computing to Control an Olfactory Display in a Video Game. 508-511
- Ayesha Siddika Nipu, Siming Liu, Anthony Harris: MAIDCRL: Semi-centralized Multi-Agent Influence Dense-CNN Reinforcement Learning. 512-515
- Mingxi Tan, Andong Tian, Ludovic Denoyer: Regularized Soft Actor-Critic for Behavior Transfer Learning. 516-519
- Benedict Wilkins, Kostas Stathis: World of Bugs: A Platform for Automated Bug Detection in 3D Video Games. 520-523
- Xiangyu Zhao, Sean B. Holden: Towards a Competitive 3-Player Mahjong AI using Deep Reinforcement Learning. 524-527
- Sebastian Oberdörfer, Philipp Krop, Samantha Straka, Silke Grafe, Marc Erich Latoschik: Fly My Little Dragon: Using AR to Learn Geometry. 528-531
- Maël Addoum, Yannick Bourquin, Quentin Bleuse, Auriane Gros, Jean Breaud, Marilou Serris, Philippe Robert: An Interactive Module for Learning and Evaluating the Basic Rules in Health Consultations. 532-535
- Haihan Duan, Yiwei Huang, Yifan Zhao, Zhen Huang, Wei Cai: User-Generated Content and Editors in Video Games: Survey and Vision. 536-543
- Ian Colbert, Mehdi Saeedi: Evaluating Navigation Behavior of Agents in Games using Non-Parametric Statistics. 544-547
- Branden Ingram, Benjamin Rosman, Clint J. van Alten, Richard Klein: Improved Action Prediction through Multiple Model Processing of Player Trajectories. 548-551
- Raluca D. Gaina, Martin Balla: TAG: Pandemic Competition. 552-559
- Michael Cook: Optimists at Heart: Why Do We Research Game AI? 560-567
- In-Chang Baek, TaeHwa Park, Taegwan Ha, Kyung-Joong Kim: Turing Test Framework for Cooperative Games. 568-569
- Megan Charity, Julian Togelius: Keke AI Competition: Solving puzzle levels in a dynamically changing mechanic space. 570-575
- Zhiyuan Yao, Tianyu Shi, Site Li, Yiting Xie, Yuanyuan Qin, Xiongjie Xie, Huan Lu, Yan Zhang: Towards Modern Card Games with Large-Scale Action Spaces Through Action Representation. 576-579
- Lars Wagner, Christopher Olson, Alexander Dockhorn: Generalizations of Steering - A Modular Design. 580-583
- Febri Abdullah, Mury F. Dewantoro, Ruck Thawonmas, Fitra Abdurrachman Bachtiar: Science Birds Gameplay With a Smile Interface to Promote the Spectator's Emotion. 584-585
- Jiahao Li, Ke Fang, Wai Kin Victor Chan: Parallel Dance: A Social Game on Campus Public Screens. 586-587
- Albertus Agung, Roman Savchyn, Pujana Paliyawan, Ruck Thawonmas: Cute Helper: A Study on the Effect of Virtual Character Expressions on Players' Engagement in a Game for Collecting Artwork Descriptions. 588-589
- Jakub Dakowski, Piotr Jaworski, Waldemar Wojna: Quick generation of crosswords using concatenation. 590-593
- Moinak Ghoshal, Juan Ong, Hearan Won, Dimitrios Koutsonikolas, Caglar Yildirim: Co-located Immersive Gaming: A Comparison between Augmented and Virtual Reality. 594-597
- Jiahao Li, Ke Fang, Xing Sun, Zhouyi Li, Xinyang Wen, Wai Kin Victor Chan: Gulliver's Game: Multiviewer and Vtuber Extreme Asymmetric Game. 598-599
- Maximilian Hünemörder, Mirjam Raffaela Bayer, Nadine Sarah Schüler, Peer Kröger: Stirring the Pot - Teaching Reinforcement Learning Agents a "Push-Your-Luck" board game. 600-603
- Cameron Browne: Quickly Detecting Skill Trace in Games. 604-607
- Timo Bertram, Johannes Fürnkranz, Martin Müller: Supervised and Reinforcement Learning from Observations in Reconnaissance Blind Chess. 608-611
- Chu-Hsuan Hsueh, Kokolo Ikeda: Playing Good-Quality Games with Weak Players by Combining Programs with Different Roles. 612-615
- Luntong Li, Zhiming Zhou, Jiajun Chai, Zhen Liu, Yuanheng Zhu, Jianqiang Yi: Learning Continuous 3-DoF Air-to-Air Close-in Combat Strategy using Proximal Policy Optimization. 616-619
- Anurag Sarkar, Seth Cooper: Ordering Levels in Human Computation Games using Playtraces and Level Structure. 620-623
- Takuto Itoi, Edgar Simo-Serra: PIFE: Permutation Invariant Feature Extractor for Danmaku Games. 624-627
- Wang Weikai, Kiminori Matsuzaki: Improving DNN-based 2048 Players with Global Embedding. 628-631
- Thai Van Nguyen, Xincheng Dai, Ibrahim Khan, Ruck Thawonmas, Hai V. Pham: A Deep Reinforcement Learning Blind AI in DareFightingICE. 632-637
- Solange Margarido, Penousal Machado, Licínio Roque, Pedro Martins: Let's Make Games Together: Explainability in Mixed-initiative Co-creative Game Design. 638-645