Introduction
Initially, we needed to decide which game engine to use for development, choosing between Unreal Engine 4 and Unity 3D. Unreal Engine 4 won out mainly because of the control it gives us over very specific parts of the game, such as custom camera control driven by the mobile device's sensors and, later, custom data tracking for analytics. This doesn't mean Unity couldn't do the same; rather, our game programmer has more experience with Unreal Engine 4 and C++.
Controlling the Camera
We wanted as much control over the camera orientation as possible so that we could store the orientation data for later analysis. Originally the camera was driven directly by the raw sensor data from the mobile device. However, the raw data was very noisy, which produced visible jitter in the camera orientation, and it contained inflection points that caused sudden jumps in the camera's orientation values. Reducing the noise and removing the inflection points is still a work in progress; in the meantime we have switched to the GoogleVR plugin, which solves both problems at the cost of giving up full control over the camera.
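As a concrete illustration, here is a minimal sketch of the kind of low-pass filtering that can tame sensor jitter, built on UE4's APlayerController::GetInputMotionState. The AExplorationPawn class, the SmoothingSpeed parameter, and the pitch/yaw/roll reading of the tilt vector are our own illustrative assumptions, not the project's actual (still in-flux) code.

```cpp
// ExplorationPawn.h -- illustrative names, not the project's actual classes.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "ExplorationPawn.generated.h"

UCLASS()
class AExplorationPawn : public APawn
{
    GENERATED_BODY()

public:
    virtual void Tick(float DeltaSeconds) override;

private:
    // Smoothed camera orientation carried between frames.
    FQuat SmoothedOrientation = FQuat::Identity;

    // How aggressively the filter tracks the raw reading (per second).
    // Lower values cut more jitter but add perceptible lag.
    UPROPERTY(EditAnywhere, Category = "Camera")
    float SmoothingSpeed = 8.f;
};
```

```cpp
// ExplorationPawn.cpp
#include "ExplorationPawn.h"
#include "GameFramework/PlayerController.h"

void AExplorationPawn::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    APlayerController* PC = Cast<APlayerController>(GetController());
    if (!PC)
    {
        return;
    }

    // Raw device motion data; GetInputMotionState also reports rotation
    // rate, gravity and acceleration, which we ignore here.
    FVector Tilt, RotationRate, Gravity, Acceleration;
    PC->GetInputMotionState(Tilt, RotationRate, Gravity, Acceleration);

    // Treat the tilt vector as pitch/yaw/roll (an assumption; the exact
    // meaning is platform-dependent) and blend toward the new reading
    // with a frame-rate-independent alpha.
    const FQuat Target = FRotator(Tilt.X, Tilt.Y, Tilt.Z).Quaternion();
    const float Alpha = FMath::Clamp(SmoothingSpeed * DeltaSeconds, 0.f, 1.f);
    SmoothedOrientation = FQuat::Slerp(SmoothedOrientation, Target, Alpha);

    PC->SetControlRotation(SmoothedOrientation.Rotator());
}
```

A filter like this trades responsiveness for stability: a lower SmoothingSpeed suppresses more jitter and dampens the inflection spikes, but adds noticeable lag, which is part of why this area is still under development.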
Player Character Movement
With the camera control in place and attached to the player character, movement through the world needed to be implemented. All movement input has to come through touch gestures (swipe, tap, double tap, and so on). At the moment there are two approaches to locomotion:
- Walking and running via on-screen user interface buttons (implemented; see the sketch after this list)
- Teleporting to a targeted location using a safe-point system (in development)
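For the implemented button-driven scheme, the sketch below shows one way on-screen buttons could drive a UE4 Character: the walk button holds a flag that feeds AddMovementInput each frame, and the run button raises MaxWalkSpeed. The AExplorerCharacter class, function names, and speed values are placeholders, not the game's actual code.

```cpp
// ExplorerCharacter.h -- placeholder names for illustration.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "ExplorerCharacter.generated.h"

UCLASS()
class AExplorerCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    virtual void Tick(float DeltaSeconds) override;

    // Bound to the on-screen walk button's Pressed/Released events.
    UFUNCTION(BlueprintCallable, Category = "Locomotion")
    void SetWalkHeld(bool bHeld) { bWalkHeld = bHeld; }

    // Bound to the run toggle button.
    UFUNCTION(BlueprintCallable, Category = "Locomotion")
    void SetRunning(bool bRun);

private:
    bool bWalkHeld = false;
};
```

```cpp
// ExplorerCharacter.cpp
#include "ExplorerCharacter.h"
#include "GameFramework/CharacterMovementComponent.h"

void AExplorerCharacter::SetRunning(bool bRun)
{
    // Running simply raises MaxWalkSpeed; the values are assumed for the
    // sketch, not tuned numbers from the game.
    GetCharacterMovement()->MaxWalkSpeed = bRun ? 600.f : 200.f;
}

void AExplorerCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    if (bWalkHeld)
    {
        // Walk in the direction the sensor-driven camera is facing,
        // flattened onto the ground plane.
        FVector Forward = GetControlRotation().Vector();
        Forward.Z = 0.f;
        AddMovementInput(Forward.GetSafeNormal());
    }
}
```

Keeping the buttons as plain press/release events means the same character interface could later be driven by gestures or the planned teleport system without reworking the movement code.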
In our next post we'll cover other aspects of the game's early development, including the artificial intelligence behind the traffic system.