
SCREEN:CAPTURE

First-Person Puzzle Horror
Technical Game Designer
January 2024 - May 2024
Made in Unreal 5

About

SCREEN:CAPTURE is an Unreal Engine 5 game made as the sole deliverable for CTIN489, a class at USC. Together with my partner for the class, I created a first-person puzzle horror game over the course of the semester, learning more about production, design, and engineering along the way. The game takes advantage of the affordances of a camera as the player's main way of interacting with the world. Sprinkle in some horror elements, with escaped lifeforms that you have to capture but that are also trying to kill you, and you have SCREEN:CAPTURE.

Highlights

  • Designed and programmed the Capture Camera, the main gameplay mechanic, allowing players to interact with their surroundings in unique ways

  • Implemented a stationary enemy AI that attacks sounds in its vicinity

  • Created a template AI for an NPC that runs away and hides from the player

  • Greyboxed level layouts to reinforce a fear of what's around the corner

  • Scripted player locomotion mechanics to emulate the aspects of realistic movement that make a game immersive

  • Engineered a Save/Load system that saves complex object states, event system information, player progress, and a variety of other information

  • Created a diegetic UI on an in-game computer screen that players can interact with and that supports a variety of different UI elements

  • A ton of other engineering systems that you can scroll down and see!!!

Capture Camera

The Capture Camera is the game mechanic that SCREEN:CAPTURE is built around. It acts as a camera that players can use to "capture" objects from the world and also allows players to place these captured objects into the world.

FinalCameraIteration.gif

GIF of Capture Camera Functionality

  • Highlights
    • Accurate checks of whether objects are in view of the camera, using masks generated at runtime

    • Algorithm to determine position to place objects as well as validity of placements

    • Preview window to see stored objects in the camera

    PreviewWindow.png

    Preview window showing stored objects, even actors that can move!

    Details
    • The camera works in three stages. The first is starting the capture, where the player focuses the camera on an object. The second is holding the capture, where the player must keep the object in frame while a bar fills up in increments. The third is the release: if the object never left the camera's view, the player can let go and the object gets stored inside the camera.

      Many people look at this description, immediately reach for dot products as the solution, implement that, and call it a day. However, a camera's view doesn't start from a single point; it is a frustum, starting from a rectangle. A simple dot product therefore has a hard time accurately telling whether an object is actually in frame near the edges of the camera. This was very important to us, though, as we wanted the camera to feel as good as possible and reduce user friction.

      There were two solutions to this problem. I could either determine the overlap of the camera frustum with the object's bounding box, or use a post-process material that would give a black-and-white mask of the world. I chose the second option, mostly because I wanted to learn more about materials and I already had an inkling of how to create a mask, given some of the shaders I had created for highlighting capturable objects.

      image.png

      I won't get into the inner workings of the material itself, but in short, when the material is applied to a rendering camera it produces a black-and-white mask: the white parts are the capturable object and the black parts are anything that is not capturable. So, for example, if a wall was blocking part of a capturable object, that occlusion would show up in the mask. This allowed us to determine whether an object was in frame just by checking the percentage of the mask that was white. Even better, we didn't have to render the texture it was drawing to at screen resolution. Instead, we shrank the texture to 32x32, because fidelity didn't affect the percentages at all, and it became a very cheap texture to draw to as well as read from!

      With this simple texture we were able to check whether any object was in view of the Capture Camera. We did still run dot product calculations to gate whether the mask check happened at all, since reading and writing a texture is still more expensive than a dot product. Other than that, this solution worked great and held up accurately in gameplay. (A sketch of the check is included after these details.)

    • Placing objects was more involved than I expected. I thought a simple line trace would be good enough: check which direction the player was pointing and place the object there. Before we get to the final algorithm, let's walk through how I got there with a few scenarios.

      Scenario1Diagramt.png

      The first scenario was probably the least common, but something I wanted to account for. With a simple line trace, if the player was looking through a gap in a wall, the camera would show an invalid object placement. We do have a box overlap that checks whether an object's position is valid, and it would catch this and stop the player from actually placing an object inside the gap. However, we don't even want the preview to appear there; we want the object to be shown at the desired position instead. This was a really easy fix, though, as we could just change from a line trace to a box trace the size of the object. That worked in this scenario, but not in the next one.

      Scenario2Diagram.png

      The box trace was a good solution until the player was standing on the edge of something, like in the diagram above. If the player wanted to place an object on the floor below, the box trace would detect an overlap and instead place it smack dab in front of their face. When I first discovered this it felt really unwieldy and frustrating to use, sooooo it had to change. Instead, we combined a single line trace with a box sweep.

      Scenario3Diagram.png

      We line trace towards the furthest position the player is looking at, and then sweep backwards with a multi box trace, checking for the first valid position where we can place the object. There is technically a bit more complexity, as we have to handle a few edge cases, but this is a good enough overview of the system. Rough sketches of both the in-frame check and this placement trace follow below.
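
To make the in-frame check concrete, here is a rough sketch in Unreal C++. The names (MaskTarget, ViewDotThreshold, WhiteCoverageThreshold, and the owning class) are illustrative placeholders rather than the exact project code, but the shape is the same: gate with a dot product, then measure how much of the 32x32 mask is white.

```cpp
// Sketch only: illustrative names, not the shipped implementation.
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

bool AScreenCaptureCamera::IsObjectInFrame(const AActor* Target) const
{
    // Cheap gate: skip the texture readback entirely if the target is behind
    // the camera or far outside its forward cone.
    const FVector ToTarget = (Target->GetActorLocation() - GetActorLocation()).GetSafeNormal();
    if (FVector::DotProduct(GetActorForwardVector(), ToTarget) < ViewDotThreshold)
    {
        return false;
    }

    // Read back the low-resolution (32x32) mask the post-process material rendered.
    TArray<FColor> Pixels;
    if (!UKismetRenderingLibrary::ReadRenderTarget(GetWorld(), MaskTarget, Pixels))
    {
        return false;
    }

    // Count "white" pixels (capturable surface) against the total pixel count.
    int32 WhiteCount = 0;
    for (const FColor& Pixel : Pixels)
    {
        if (Pixel.R > 200 && Pixel.G > 200 && Pixel.B > 200)
        {
            ++WhiteCount;
        }
    }
    const float Coverage = Pixels.Num() > 0 ? float(WhiteCount) / Pixels.Num() : 0.f;
    return Coverage >= WhiteCoverageThreshold;
}
```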
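
And a similar sketch of the placement traces: a line trace out to the furthest aim point, then a box sweep back towards the player that keeps the first candidate position where the object actually fits. PlacementRange and ObjectHalfExtent are illustrative parameters, and the real system layers more edge-case handling on top of this.

```cpp
// Sketch only: illustrative names, edge cases omitted.
bool AScreenCaptureCamera::FindPlacementLocation(FVector& OutLocation) const
{
    const FVector Start = GetActorLocation();
    const FVector Forward = GetActorForwardVector();

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(this);

    // 1) Find the furthest point the player is looking at (or max range if nothing is hit).
    FVector FarPoint = Start + Forward * PlacementRange;
    FHitResult LineHit;
    if (GetWorld()->LineTraceSingleByChannel(LineHit, Start, FarPoint, ECC_Visibility, Params))
    {
        FarPoint = LineHit.ImpactPoint;
    }

    // Pull the sweep start back so the box doesn't begin inside the surface we just hit.
    const FVector SweepStart = FarPoint - Forward * ObjectHalfExtent.Size();
    const FCollisionShape Box = FCollisionShape::MakeBox(ObjectHalfExtent);

    // 2) Sweep the object's box back towards the player, collecting hits along the way.
    TArray<FHitResult> Hits;
    GetWorld()->SweepMultiByChannel(Hits, SweepStart, Start, FQuat::Identity,
                                    ECC_Visibility, Box, Params);

    // 3) Take the first candidate (furthest first) where the box overlaps nothing.
    TArray<FVector> Candidates;
    Candidates.Add(SweepStart);
    for (const FHitResult& Hit : Hits)
    {
        Candidates.Add(Hit.Location);
    }
    for (const FVector& Candidate : Candidates)
    {
        const bool bBlocked = GetWorld()->OverlapBlockingTestByChannel(
            Candidate, FQuat::Identity, ECC_Visibility, Box, Params);
        if (!bBlocked)
        {
            OutLocation = Candidate;
            return true;
        }
    }
    return false; // No valid spot between the aim point and the player.
}
```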

  • The Capture Camera went through many iterations to get to where it is now, and even now there are many improvements I would like to make to it. It first started out as a brainstorm about the best way for players to interact with the lifeforms (the main antagonists). It was originally going to be a literal screenshot tool like the one you see on a computer; however, we pivoted after deciding that this wouldn't work as well in 3D. Looking back on that, I don't think that was the case; we were too closed-minded and should have considered it much more seriously. We settled on a more traditional camera as the way for the player to interact with lifeforms.

    From there the design question became how players would actually trap lifeforms in this camera. Eventually we decided that the camera would also be the tool players use to trap lifeforms, by letting them physically interact with certain parts of the world through the camera. You can see one of the first prototypes right here.

    OriginalCameraItemMode.gif

    We'll talk later about how the fundamental concept of a typical camera was flawed, but the Capture Camera's first iteration, and every iteration after it, was a fight to make it very clear to players. At first the Capture Camera had two separate modes that players could switch between: one interacted with objects and one interacted with lifeforms. Playtesting showed that players didn't really get this interaction. We tried tutorializing it a little, still with no success, showing that players could not understand that lifeforms and other objects were different. From there we combined the two modes: capturing lifeforms and storing items were done through the same interface, which you can also see in the final iteration.

    FinalCameraIteration.gif

    We also made every object the camera could interact with need to be "charged" before capture: the player had to hold the object in frame for a specific amount of time before it was captured. This did make capturing slower, but it let us make the distinction between objects and lifeforms as small as possible so players didn't get confused. It also opened up the possibility of moments where the player faces an impending threat but has to wait for a capture to go off.

    There were also a ton of other iterations we went through, such as adding a battery charge to the camera to combat players having the camera out too much, and then removing it when we realized there are better ways to incentivize the player to put the camera away that don't require extra level design work to support.

    You might notice that this entire time I didn't mention how the camera aligned with the goals of the game. This is where SCREEN:CAPTURE suffered the most: we had an experience goal of an "otherworldly sense of unease," and whatever your opinion of that goal's strength, the true problem was that we didn't follow it strongly. We were very wishy-washy and designed things because they "made sense" in our imagination, rather than designing around this core experience and then playtesting to see if it worked. If I were to redo this game, the biggest change would be putting my foot down with myself and focusing on designing around experience goals and design pillars far more strongly than we ended up doing.

Player Movement

Player movement in horror games is very hard to get right, because it feels like something that is really easy to overlook. People always say "just make it immersive," but what does that mean? For our game it meant movement that told a story. The character in SCREEN:CAPTURE isn't some Olympic athlete; they are a researcher with the most average of builds, and the movement in SCREEN:CAPTURE is modelled after this fact.

    • Player slows down when landing from a small jump

    • Impact Landing from higher heights that stops player movement and triggers camera animations depending on height

    • Designing around the character being an average joe

    ImpactLanding.gif

    A player falling from a high enough height will trigger an impact landing, which causes the player to lose control of their character while the character recovers from the impact. While this may seem like a frivolous mechanic, it serves as a reminder to the player that they are an average person.

    SlowDownWhileJumping.gif

    Even when landing from a small jump, players take a tiny bit of impact and actually slow down a little, though they do not lose control of their character. This helps keep the player grounded in the fact that they are just a person, as the mechanic prevents heavy bunny hopping, which usually gives the impression that the character is freakishly athletic.

    • Smooth Lerping between crouch and uncrouch, also doesn't break when spammed

    • Realistic player camera alignment when shrinking capsule collision

    BetterCrouch.gif

    Crouching has been a staple of horror games for as long as I can remember. It is a multifaceted tool that can be used both to make the player feel safe while hiding under a table and to make them fear for their life while crawling through a dark, unexplored tunnel. In SCREEN:CAPTURE I wanted the crouch to capture a feeling of safety. One of the biggest parts of this was adding a smooth lerp to crouching, which adds a lot of comfort; the gradual but still fast movement gives players relief. (A sketch of this interpolation is included at the end of this section.)

    • Realistic footstep sound intervals that support the transitions between all the different movement states.

    In all honesty, I should have designed footsteps with a lot more intention than what ended up happening. They were put into the game because, well, other horror games do it, and there wasn't much thought beyond that. One of the lessons learned from making this game is to figure out why other games in the same genre are doing something, and then figure out what that contributes to your game.

    The engineering of footstep sounds is a lot more interesting than the design I did, though.

    Many games just use a timer for footsteps, and that works out well enough: if the player is running, make the timer fire more frequently, and wham bam, you have something serviceable. However, in a genre where your footsteps are louder on average than in other games, unusual patterns in footstep sounds stand out, especially when transitioning between movement states. For example, should you reset the timer completely when transitioning from sprinting to walking? That might leave a weird gap, so maybe you somehow convert the sprinting timer into a walking timer. It's really complicated and way too much to juggle, and it gets out of hand quickly once you add crouching and other movement states.

    Instead of using timers, I use the distance the player has moved since the last step sound. Each movement state that triggers step sounds has its own "step distance," and each time the accumulated distance reaches that value, the corresponding movement state's step sound plays. This makes transitions between movement states really easy: instead of guessing at timer conversions, we just compare the distance since the last step against a constant.
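
Roughly sketched, the footstep logic looks something like the component below. The member names (LastLocation, DistanceSinceLastStep, StepDistances, CurrentMovementState, PlayFootstepSound) are illustrative stand-ins, and the real system tracks a bit more state than this.

```cpp
// Sketch only: assumes a hypothetical UFootstepComponent on the player character
// with the members mentioned above declared in its header.
void UFootstepComponent::TickComponent(float DeltaTime, ELevelTick TickType,
                                       FActorComponentTickFunction* ThisTickFunction)
{
    Super::TickComponent(DeltaTime, TickType, ThisTickFunction);

    // Only horizontal movement counts towards a step.
    const FVector Location = GetOwner()->GetActorLocation();
    FVector Delta = Location - LastLocation;
    Delta.Z = 0.f;
    DistanceSinceLastStep += Delta.Size();
    LastLocation = Location;

    // Each movement state (walk, sprint, crouch, ...) has its own step distance,
    // so transitions between states need no timer conversions at all.
    if (DistanceSinceLastStep >= StepDistances[CurrentMovementState])
    {
        PlayFootstepSound(CurrentMovementState);
        DistanceSinceLastStep = 0.f;
    }
}
```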
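
Going back to the crouch mentioned earlier, the smooth, spam-proof interpolation can be sketched the same way: every tick the capsule half-height moves toward whichever target the player currently wants, and the camera stays pinned near the top of the shrinking capsule. CrouchedHalfHeight, StandingHalfHeight, CrouchInterpSpeed, EyeOffsetFromTop, and the Camera member are illustrative names on a hypothetical character class, not the project's exact code.

```cpp
// Sketch only: spamming crouch just flips bWantsToCrouch, so the interpolation
// can never get stuck between states.
void AScreenCapturePlayer::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    const float TargetHalfHeight = bWantsToCrouch ? CrouchedHalfHeight : StandingHalfHeight;
    const float CurrentHalfHeight = GetCapsuleComponent()->GetScaledCapsuleHalfHeight();

    // Gradual but still fast; FInterpTo happily handles being interrupted mid-transition.
    const float NewHalfHeight = FMath::FInterpTo(
        CurrentHalfHeight, TargetHalfHeight, DeltaTime, CrouchInterpSpeed);
    GetCapsuleComponent()->SetCapsuleHalfHeight(NewHalfHeight);

    // Keep the camera near the top of the capsule so the eye height follows the
    // collision as it shrinks instead of snapping.
    FVector CameraOffset = Camera->GetRelativeLocation();
    CameraOffset.Z = NewHalfHeight - EyeOffsetFromTop;
    Camera->SetRelativeLocation(CameraOffset);
}
```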

Level Design

While the Capture Camera was not something I was super happy with, the level design for this game was something I was very proud of, even if it could have been better in many areas.

The level design I was most proud of was the starting hallway, which acted as a tutorial for the basic movement mechanics but was also designed to set the tone for the game.

StartingTutorialAreaLevelLayout_edited.jpg

2D Level Layout

image.png

3D Greybox

FinalStartingHallway.gif

Final Iteration

One of the biggest goals when developing this starting hallway was not just tutorializing the player, but getting under the player's skin: not too early, yet early enough to set a tone. The starting hallway has a bit of windup, with an initial passageway that sits offscreen in the greybox. Eventually the player reaches the end of that passageway and is met with the clutter of the main hallway, creating contrast and spiking the player's sense of danger.

There is no real danger in this hallway, though; it instead serves as reinforcement for the player and helps guide player flow to the end, as players will see the terminal at the end of the area and want to reach it to investigate.

  • SwatchletBasement_edited.jpg
    CameraRoomLevelLayout.jpg
    SwatchletOffice_edited.jpg
  • FreakRoomGreybox.png
    FirstBrokenRoomGreybox.png

    One of my biggest regrets is not documenting this project as thoroughly as I should have. There should be so many more screenshots of greyboxes.

Enemy AI

Highlights
  • Template AI behavior built with EQS and behavior trees for my partner to extend with further behavior and functionality

  • Hostile stationary enemy AI that reacts to sound

Details

My partner was the one in charge of AI design; however, I still contributed both design-wise and engineering-wise. One of the lifeforms (enemies) that my partner wanted to put in was called the Swatchlet, which acted as a kind of starter creature.

SwatchletRunAway.gif

I built a template for my partner that they could iterate on and improve. It consists of multiple EQS (Environment Query System) checks, such as finding the best location to move to away from the player and finding the best spot to hide in. I also created a template behavior tree for my partner to build upon, with functionality for the Swatchlet to be scared by the player and to "patrol" around.

The other enemy AI work I did was more design-oriented. My partner and I realized halfway through development that the Swatchlet was going to be a less hostile lifeform, and that this would hurt the vibe of a horror game. From there we decided to add a second type of lifeform that would actively be looking to kill the player. This became known as the Tympanic Mold, a stationary enemy that listens for sounds and attacks when it hears them.

TympanicMold.gif

Tympanic Mold Striking

  • Reacts to sound a few times before striking out and pulling in anything it touches.

  • A soundwave VFX was added at a later point, when players struggled to understand that their footsteps were causing the Tympanic Mold to strike

The Tympanic Mold did succeed at one of its primary goals: scaring the player and establishing a visible threat that players had to be on guard for. However, it didn't contribute anything else useful to the gameplay. Instead, it confused players: I didn't take into account that players would consider the Tympanic Mold capturable, since we designed it assuming players wouldn't treat the tentacles as the main body and try to capture them. Playtesting showed that players kept trying to capture the tentacle, but we didn't learn that before it was too late to add any mechanic that lets the player capture it. The biggest takeaway is to playtest as often as you can, no matter with whom, so you avoid problems like this and can plan further ahead for them.
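
For the engineering-curious, a stationary, hearing-driven enemy like the Tympanic Mold can be wired up with Unreal's AI perception hearing sense. The sketch below uses illustrative names (ATympanicMoldController, NoisesBeforeStrike, Strike) rather than the project's actual classes; the idea is simply to count heard noises and lash out after a few of them.

```cpp
// Sketch only: members (PerceptionComp, HearingConfig, HeardNoiseCount,
// NoisesBeforeStrike) and the UFUNCTION handler are declared in the header.
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISenseConfig_Hearing.h"

ATympanicMoldController::ATympanicMoldController()
{
    PerceptionComp = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("Perception"));
    HearingConfig = CreateDefaultSubobject<UAISenseConfig_Hearing>(TEXT("HearingConfig"));
    HearingConfig->HearingRange = 1500.f;
    HearingConfig->DetectionByAffiliation.bDetectNeutrals = true;
    PerceptionComp->ConfigureSense(*HearingConfig);
}

void ATympanicMoldController::BeginPlay()
{
    Super::BeginPlay();
    PerceptionComp->OnTargetPerceptionUpdated.AddDynamic(
        this, &ATympanicMoldController::OnNoiseHeard);
}

void ATympanicMoldController::OnNoiseHeard(AActor* Actor, FAIStimulus Stimulus)
{
    if (!Stimulus.WasSuccessfullySensed())
    {
        return;
    }

    // React (twitch, play the soundwave VFX) a few times, then strike.
    ++HeardNoiseCount;
    if (HeardNoiseCount >= NoisesBeforeStrike)
    {
        Strike(Stimulus.StimulusLocation); // Lash out towards the last heard sound.
        HeardNoiseCount = 0;
    }
}
```

On the noise-making side, anything loud (footsteps, dropped objects) would report itself through UAISense_Hearing::ReportNoiseEvent so the mold's perception can pick it up.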

Save/Load System

image.png
  • Customizable C++/Blueprints save component that can be attached to any Actor, with custom code defining how to save and load data on the attached actor

  • GUID Generator that will automatically populate all save components in the world with a unique GUID that the game will use to save and load data onto the correct actors

  • Current Save System saves numerous pieces of data such as player progress, event states, objects stored in the camera, and object locations
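
The rough shape of the component, with illustrative names rather than the project's exact code, is a stable FGuid plus a pair of overridable hooks the save system calls when gathering and applying state:

```cpp
// Sketch only: USaveableComponent, FActorSaveRecord, and the hook names are
// illustrative; the _Implementation bodies would live in the .cpp.
#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "SaveableComponent.generated.h"

USTRUCT()
struct FActorSaveRecord
{
    GENERATED_BODY()

    UPROPERTY() FGuid SaveId;
    UPROPERTY() FTransform Transform;
    UPROPERTY() TArray<uint8> CustomData; // Per-actor payload filled by the hooks below.
};

UCLASS(ClassGroup = (Saving), meta = (BlueprintSpawnableComponent))
class USaveableComponent : public UActorComponent
{
    GENERATED_BODY()

public:
    // Populated once in-editor by the GUID generator so it stays stable across sessions.
    UPROPERTY(VisibleAnywhere)
    FGuid SaveId;

    // Hooks the save system calls; C++ or Blueprint subclasses decide what to store.
    UFUNCTION(BlueprintNativeEvent)
    void GatherSaveData(FActorSaveRecord& OutRecord);

    UFUNCTION(BlueprintNativeEvent)
    void ApplySaveData(const FActorSaveRecord& Record);
};
```

In this sketch, the GUID generator would walk every save component in the world and assign FGuid::NewGuid() wherever SaveId has not been set yet.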

UI Engineering

ComputerTerminal.gif
  • Diegetic UI computer screen that takes mouse input and transfers it onto the in-game computer terminal, which players can interact with (see the sketch at the end of this section).

  • Built on Unreal's UMG system, so you don't have to code any new UI elements or editors; everything works with what's built right in.

image.png
  • Keybind remapper that allows players to remap any key they want. None of this is hardcoded; you just give the system the actions that need to be bound, and the player can bind them to anything they want.

  • A keybind texture loader also allows a texture to be attached to each specific key, so every instance of that "key" shows the texture instead

KeybindTextureUpdate.gif
  • Keybind remapper will also dynamically reload keybind prompts in-game if keybinds are changed
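
A sketch of the remapping layer (with illustrative names; not the exact project code) is essentially a map from action names to FKeys, plus a change broadcast that prompts listen to:

```cpp
// Sketch only: UKeybindSubsystem, FOnKeybindsChanged, and KeyTextures are
// illustrative names. In practice the bindings and textures would be loaded
// from config or data assets rather than left empty.
#include "CoreMinimal.h"
#include "Subsystems/GameInstanceSubsystem.h"
#include "InputCoreTypes.h"
#include "Engine/Texture2D.h"
#include "KeybindSubsystem.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE(FOnKeybindsChanged);

UCLASS()
class UKeybindSubsystem : public UGameInstanceSubsystem
{
    GENERATED_BODY()

public:
    // Fired whenever any binding changes so in-game prompts can reload themselves.
    UPROPERTY(BlueprintAssignable)
    FOnKeybindsChanged OnKeybindsChanged;

    UFUNCTION(BlueprintCallable)
    void RebindAction(FName Action, FKey NewKey)
    {
        Bindings.Add(Action, NewKey);
        OnKeybindsChanged.Broadcast();
    }

    UFUNCTION(BlueprintCallable)
    FKey GetKeyForAction(FName Action) const
    {
        const FKey* Found = Bindings.Find(Action);
        return Found ? *Found : EKeys::Invalid;
    }

    // Optional per-key texture so every prompt shows key art instead of raw key names.
    UFUNCTION(BlueprintCallable)
    UTexture2D* GetTextureForKey(FKey Key) const
    {
        UTexture2D* const* Found = KeyTextures.Find(Key);
        return Found ? *Found : nullptr;
    }

private:
    UPROPERTY() TMap<FName, FKey> Bindings;
    UPROPERTY() TMap<FKey, UTexture2D*> KeyTextures;
};
```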
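
As for the diegetic terminal mentioned at the top of this section, the sketch below shows the general approach of forwarding the player's clicks onto a 3D UMG widget with a WidgetInteractionComponent; the pawn class, Camera member, and input handler names are illustrative.

```cpp
// Sketch only: assumes the terminal's screen is a WidgetComponent in the world
// and that OnClickPressed/OnClickReleased are bound to the player's click input.
#include "Components/WidgetInteractionComponent.h"

AScreenCapturePlayer::AScreenCapturePlayer()
{
    WidgetInteraction = CreateDefaultSubobject<UWidgetInteractionComponent>(TEXT("WidgetInteraction"));
    WidgetInteraction->SetupAttachment(Camera);      // Trace from wherever the player looks.
    WidgetInteraction->InteractionDistance = 300.f;  // Only reaches nearby screens.
}

void AScreenCapturePlayer::OnClickPressed()
{
    WidgetInteraction->PressPointerKey(EKeys::LeftMouseButton);
}

void AScreenCapturePlayer::OnClickReleased()
{
    WidgetInteraction->ReleasePointerKey(EKeys::LeftMouseButton);
}
```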

Game Event System

  • Customizable system that applies the idea of the observer pattern, taking directional, overlap, and a variety of other triggers and connecting them to events

  • Can interface with the save/load system to create cutscenes or events that will not run again after a player saves

  • Used extensively in the project, for everything from simple audio cues to more complex cutscenes
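
Sketched with illustrative names (not the project's actual classes), the pattern is a world subsystem that remembers which event IDs have fired, which is the piece the save/load system can serialize, plus trigger actors that report into it:

```cpp
// Sketch only: one-shot events are skipped if their ID is already in FiredEvents,
// which a save/load pass would write out and restore.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/BoxComponent.h"
#include "Subsystems/WorldSubsystem.h"
#include "GameEventSystem.generated.h"

DECLARE_MULTICAST_DELEGATE_OneParam(FOnGameEvent, FName /*EventId*/);

UCLASS()
class UGameEventSubsystem : public UWorldSubsystem
{
    GENERATED_BODY()

public:
    FOnGameEvent OnGameEvent;   // Observers (audio cues, cutscenes, ...) bind here.
    TSet<FName> FiredEvents;    // Persisted by the save system, restored on load.

    void FireEvent(FName EventId, bool bOnlyOnce = true)
    {
        if (bOnlyOnce && FiredEvents.Contains(EventId))
        {
            return; // Already ran, possibly before the player's last save.
        }
        FiredEvents.Add(EventId);
        OnGameEvent.Broadcast(EventId);
    }
};

UCLASS()
class AGameEventTrigger : public AActor
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere) FName EventId;
    UPROPERTY(VisibleAnywhere) UBoxComponent* Volume;

    AGameEventTrigger()
    {
        Volume = CreateDefaultSubobject<UBoxComponent>(TEXT("Volume"));
        RootComponent = Volume;
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Volume->OnComponentBeginOverlap.AddDynamic(this, &AGameEventTrigger::OnOverlap);
    }

    UFUNCTION()
    void OnOverlap(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                   UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                   bool bFromSweep, const FHitResult& SweepResult)
    {
        GetWorld()->GetSubsystem<UGameEventSubsystem>()->FireEvent(EventId);
    }
};
```

Directional triggers and the other trigger types follow the same pattern: each one just decides when to call FireEvent.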

Design Postmortem

SCREEN:CAPTURE was, and in my opinion still is, a very good concept; however, the execution is something I would like to improve on a lot the next time I'm on a project. I'm not going to cover everything my partner and I talked about during our postmortem, because it would be about a five-page essay. Instead I'm going to talk about the one flaw that ended up causing most, if not all, of our other flaws, and that's the design vision of the game.

The design vision for this game was weak throughout basically the entire development. I don't mean that the concept itself was weak; I believe it is very strong if executed properly. We even came up with an experience goal and design pillars, but our problem was twofold: our design pillars were not that strong, as we didn't yet have a good grasp on how to make them, and, more importantly, we didn't follow the design vision that we came up with. Even a lukewarm design vision, if followed strictly, can lead to good results, and in our case we didn't follow it.

In future projects I want to be much clearer and stricter, not just with others but more importantly with myself, in asking how each feature follows the design vision for the game. I also want to take this to the next level by setting goals for individual features that align with that vision, so that when I'm designing, I can judge the success of my design by playtesting and checking the results against the goal of the feature.
