Programming the Game: Part 2

Showing the Map of the Level

To keep the player from feeling lost and not knowing where to go, the game will feature a level map, displayed once the player points the device towards the floor. The exact details of the map’s interactivity have not been settled yet; however, while on the map screen, the player might be able to change some settings on the fly.
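
The post does not describe how the floor check is implemented; purely as an illustration, the camera’s pitch could be polled each tick in Unreal Engine 4 C++ and the map toggled past a threshold. Everything below (AExplorerPawn, Camera, MapWidget, the threshold value) is a hypothetical name, not the project’s code.

    // Hypothetical sketch: show the map when the camera pitches steeply
    // toward the floor. All names and the threshold are illustrative.
    void AExplorerPawn::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);

        // In Unreal, pitch is negative when the view points below the horizon.
        const float Pitch = Camera->GetComponentRotation().Pitch;
        const bool bLookingAtFloor = Pitch < MapPitchThreshold; // e.g. -60.0f

        if (bLookingAtFloor != bMapVisible)
        {
            bMapVisible = bLookingAtFloor;
            MapWidget->SetVisibility(bMapVisible ? ESlateVisibility::Visible
                                                 : ESlateVisibility::Hidden);
        }
    }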

Main Menu

The Main Menu currently implemented in the game is mostly a placeholder; it will be replaced with an improved version that adds functionality for logging into a user’s account.

Artificial Intelligence for Traffic

The AI required a lot of planning and, while still a work in progress, is nearly finished. The traffic system is supposed to control all the vehicles in the game as well as the traffic lights. Vehicles are intended to react to each other, to traffic lights and to traffic signs. Naturally, a vehicle also has to be able to detect when the player is in front of it. At the moment, most of these requirements are implemented, with the exception of the traffic lights and signs. This means that a vehicle can accelerate, drive at a steady speed, or slow down (when there is another vehicle in front of it or when it is about to enter a sharp corner), and it can follow the road as intended by the design of the world.
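
As a rough illustration of the slow-down behaviour described above (not the project’s actual implementation), a vehicle could derive its target speed from the distance to the vehicle ahead and the sharpness of the upcoming corner. Every name and threshold in this sketch is an assumption.

    // Illustrative sketch only: pick a target speed from the two slow-down
    // conditions mentioned above. All names and thresholds are assumptions.
    float AVehicleAgent::ComputeTargetSpeed() const
    {
        float Target = CruiseSpeed;

        // Another vehicle ahead: never go faster than it inside braking range.
        if (DistanceToVehicleAhead < BrakingDistance)
        {
            Target = FMath::Min(Target, SpeedOfVehicleAhead);
        }

        // Sharp corner ahead: cap speed in proportion to how tight the turn is.
        const float TurnFactor = 1.0f - FMath::Abs(UpcomingTurnAngleDegrees) / 180.0f;
        Target = FMath::Min(Target, CruiseSpeed * FMath::Max(TurnFactor, 0.2f));

        return Target;
    }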

Programming the Game: Part 1

Introduction

Initially, we needed to decide which game engine to use for development. The decision had to be made between Unreal Engine 4 and Unity 3D. Unreal Engine 4 was chosen for the project mainly for the freedom it gives us to build very specific parts of the game ourselves, for example custom camera control using mobile device sensors and, later, custom data tracking for analytics purposes. This doesn’t mean that the Unity engine would not enable us to do the same; rather, the game programmer has more experience with Unreal Engine 4 and C++.

Controlling the Camera

We wanted as much control over the camera orientation as possible, so that we could store the orientation information for later analysis. Originally, the camera was controlled using the raw sensor data from the mobile device. However, that data was very noisy, which caused a jittery effect in the camera’s orientation, and it contained inflection points that produced sudden impulses in the orientation values. Reducing the noise and removing the inflection points are still under development; as a temporary approach we use the GoogleVR plugin, which solves both of these issues at the cost of giving up full control over the camera.
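
The post does not say which filtering technique is being developed. One common starting point for sensor jitter of this kind is a simple exponential low-pass filter; the minimal, compilable C++ sketch below is purely illustrative and also shows wraparound-safe angle deltas, since a jump from 359 to 0 degrees would otherwise register as a spike.

    // A minimal sketch, not the project's filter: exponential smoothing of
    // raw orientation angles with wraparound-safe deltas.
    #include <cmath>

    struct Orientation { float Pitch, Yaw, Roll; };

    // Smallest signed difference between two angles, in degrees.
    static float AngleDelta(float From, float To)
    {
        float Delta = std::fmod(To - From, 360.0f);
        if (Delta > 180.0f)   Delta -= 360.0f;
        if (Delta <= -180.0f) Delta += 360.0f;
        return Delta;
    }

    // Alpha in (0, 1]: smaller values smooth more but add latency.
    Orientation LowPass(const Orientation& Prev, const Orientation& Raw, float Alpha)
    {
        return {
            Prev.Pitch + Alpha * AngleDelta(Prev.Pitch, Raw.Pitch),
            Prev.Yaw   + Alpha * AngleDelta(Prev.Yaw,   Raw.Yaw),
            Prev.Roll  + Alpha * AngleDelta(Prev.Roll,  Raw.Roll),
        };
    }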

Player Character Movement

With the camera control in place and attached to the player’s character, movement through the world needed to be implemented. Input for movement control has to come from touch gestures (e.g. swipe, tap, double tap). At the moment there are two approaches to locomotion (a touch-handling sketch follows the list):

  • Walk and run ability by using the user interface buttons (implemented)
  • Teleport to the targeted location using the safe-point system (in development)
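
The post does not show how the gestures are wired up. Purely as an illustration in Unreal Engine 4 C++, touch events could be bound on the pawn and a swipe distinguished from a tap by the distance the finger travels; the class and member names, and the threshold, are hypothetical.

    // Hypothetical sketch of wiring touch gestures to movement in a pawn
    // subclass. AExplorerPawn, TouchStart, SwipeThreshold and HandleSwipe
    // are illustrative names, not the project's code.
    void AExplorerPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
    {
        Super::SetupPlayerInputComponent(PlayerInputComponent);

        PlayerInputComponent->BindTouch(IE_Pressed,  this, &AExplorerPawn::OnTouchBegan);
        PlayerInputComponent->BindTouch(IE_Released, this, &AExplorerPawn::OnTouchEnded);
    }

    void AExplorerPawn::OnTouchBegan(ETouchIndex::Type FingerIndex, FVector Location)
    {
        TouchStart = Location; // remember where the finger went down
    }

    void AExplorerPawn::OnTouchEnded(ETouchIndex::Type FingerIndex, FVector Location)
    {
        // Treat a long enough drag as a swipe; shorter contacts are left to
        // the on-screen walk/run buttons mentioned above.
        if (FVector::Dist2D(Location, TouchStart) > SwipeThreshold)
        {
            HandleSwipe(Location - TouchStart);
        }
    }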

In our next post we’ll cover other aspects of the early development of the game, including the artificial intelligence implemented for traffic.