Non-player characters (NPCs) in computer games can be modelled as intelligent systems that improve the interactivity and playability of the games. Although reinforcement learning (RL) has been a promising approach to creating the behavior models of NPCs, it typically requires an initial stage of exploration with low performance. On the other hand, imitative learning (IL) is an effective approach to pre-building an NPC’s behavior model by observing the opponent’s actions, but learning by imitation limits the agent’s performance to that of its opponents. In view of their complementary strengths, this paper proposes a computational model that unifies the two learning paradigms based on a class of self-organizing neural networks called Fusion Architecture for Learning and COgnition (FALCON). Specifically, two hybrid learning strategies, known as Dual-Stage Learning (DSL) and Mixed Model Learning (MML), are presented to integrate the two distinct learning paradigms in one framework. The DSL and MML strategies have been applied to creating autonomous NPCs in a first-person shooter game named Unreal Tournament. Our experiments show that both DSL and MML are effective in producing NPCs with faster learning and better combat performance compared with those built by traditional RL and IL methods. The proposed hybrid learning strategies thus provide an efficient approach to building intelligent NPC agents in games and pave the way towards building autonomous expert and intelligent systems for other applications.