Dev Log #5 – Cutting Content

My plan for this week’s post was to write an account of the playtest we ran earlier, but my body decided that wasn’t going to happen and went ahead and got sick, so here we are. I don’t think it would be fair to write about something that happened without me, so this week I want to talk a bit about a feature I fully designed that ended up getting cut from the final game. Why? I’ll get to that.

Booyo Park is designed to be an open-ended experience, allowing players to decide what they want to do and how. There are no explicit goals, objectives, or missions in the game; only a handful of rules and interactions that govern how game objects behave and interact. This is why the team refers to the project as a “virtual petting zoo” rather than a game. However, that doesn’t mean players don’t want goals to try and accomplish; in fact, that’s one of the most common pieces of feedback we get during playtests.

Earlier in the project, we settled on the open-ended design of the experience after trying and failing at a couple of mechanic-heavy concepts. The issue was that we would have to teach players how to accomplish those mechanics in a very short amount of time, in a completely new environment: mixed reality. Our contact in Hong Kong advised us to focus on an experience rather than a goal, which is what we set out to do. Here we are now, and the most common thing we hear from players is that they want a specific goal to work towards, or some sort of player agency.

This had us a bit stuck. We originally had a laundry list of interactions the player could have with a Booyo, but that was all in service of the open-ended nature of the experience. Implementing a goal would break our design wide open. We knew that if we were going to implement a new system, it would have to work hand-in-hand with our current design instead of against it. We hashed out some ideas, came up with a new character (who was also shortly scrapped), and landed on the idea of colour mixing.

Most of the interactions, with the exception of resetting, are natural gestures based on real-world interactions. If a player grabs a Booyo, the Booyo in turn follows the player’s hand as they move it around the screen. One idea we ran with was mixing Booyo colours when you merge them together. Most people are exposed to how colours mix when they’re very young. However, we knew it couldn’t work exactly the way it usually does, as players would eventually end up with a gross, muddy colour, so I came up with a new mixing system.

It’s more or less your standard mixing system, with a few twists:

  1. Mix two primary colours and you get a secondary (red + blue = purple).
  2. Mix a secondary with one of its component primaries and you get that primary back (red + purple = red).
  3. Mix a secondary with a primary that didn’t make it up and you get black (purple + yellow = black; more on that later).
  4. Mix two secondaries and you get the primary they both share (green + purple = blue).

Black works like a cloning colour and will take on any colour that mixes with it. I wrote up how the system would work in our game design document and followed it up with a quick table to show how the combinations would work.
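
To make those rules concrete, here’s a minimal sketch of the mixing logic in Python. The colour names and the set-based representation are my own illustration; the actual system only ever lived as a table in our design doc, so treat this as one possible reading of it:

```python
# Sketch of the Booyo colour-mixing rules (illustrative, not shipped code).
PRIMARIES = {"red", "yellow", "blue"}

# Each secondary maps to the pair of primaries that produce it.
SECONDARIES = {
    "purple": frozenset({"red", "blue"}),
    "green": frozenset({"yellow", "blue"}),
    "orange": frozenset({"red", "yellow"}),
}

def mix(a: str, b: str) -> str:
    """Return the colour produced when two Booyos merge."""
    if a == b:
        return a  # same colour: assume no change
    if a == "black":
        return b  # black clones whatever it mixes with
    if b == "black":
        return a
    if a in PRIMARIES and b in PRIMARIES:
        # Two primaries produce their secondary (red + blue = purple).
        for name, parts in SECONDARIES.items():
            if parts == {a, b}:
                return name
    if a in SECONDARIES and b in SECONDARIES:
        # Two secondaries produce the primary they share (green + purple = blue).
        return next(iter(SECONDARIES[a] & SECONDARIES[b]))
    # One primary, one secondary left over.
    primary, secondary = (a, b) if a in PRIMARIES else (b, a)
    if primary in SECONDARIES[secondary]:
        return primary  # a component primary wins back (red + purple = red)
    return "black"  # a non-component primary goes to black (purple + yellow = black)
```

A nice property of these rules is that the palette stays closed: every mix lands back on one of seven colours, so players can never grind their way down to the gross, muddy colours we were worried about.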

As you can see, all the pieces were in place; the mechanic was all set to go. All that was left was to actually implement the feature in the game. Two weeks later, it was scrapped.

As the person behind the feature’s system (which, mind you, isn’t all that complex), you’d assume I’d be upset that my precious “mechanic-baby” was denied access to the full game, yet that’s far from the case. I was the one who suggested we scrap the feature. So why?

The first reason is mostly logistical: by the time this mechanic was planned out, we were only a few weeks from the game officially going gold. Now was not the time to fix design challenges by throwing more mechanics and systems at them; it was time to reduce. Priorities fell on different tasks, and this one didn’t carry nearly as much weight towards the final project, so we scrapped it. Second, it just didn’t add anything to the core experience of the game. Sure, it was a neat addition and definitely added more player agency to an otherwise open experience, but it took the focus away from interacting with the Booyos and shifted it onto finding all the colours. It felt like it worked separately from the rest of the game rather than with it, so it was scrapped.

When you run into design challenges, it’s important to remember that adding more mechanics is not always the answer. It’s often tempting to add things that might enhance, improve, or engage, but adding mechanics can threaten a game’s scope or make the game feel unfocused. Sometimes the better solution to a problem is taking away instead of adding. The best part is that the mechanic ended up evolving into a planned design where Booyos change colour based on where you find them, which addresses the initial design challenge we received back in September. The great thing about ideas being quick to produce is that they are also quick to reuse.

Dev Log #4 – Designing Booyo Behaviour

Hey there! It’s been a while, eh? Well, to catch you up on what’s been going on, we managed to figure out what our game’s title is going to be: Booyo Sitter! In it, you take care of living blob creatures (now called Booyos) as they play, float, and interact with each other and the player. Since these creatures give the impression of being alive, they need some sort of AI system to direct how they behave and how they react to each other and to player interaction, and that’s what I’ve been working on for the past few weeks.

Booyo Behaviour

As our team’s design lead, it’s my job to help communicate and direct how the game works, not just to the team but also to our players. In this instance, I was building an easy-to-follow system that I could hand off to our tech team so they could implement the functionality. First, I broke down the behaviours in terms of the emotions I knew a Booyo should have. They are as follows:

  1. Idle – starting state, in which the Booyo floats around and waits for some sort of input from the player or another Booyo
  2. Joyful – happens when a Booyo merges or after it pops
  3. Surprised – happens when a Booyo is being held
  4. Scared – a transitional state that happens right before popping
  5. Wincing – a transitional state that happens right before merging

Once the main emotions were figured out, I had to define the actions that would lead to each emotion. I wrote down a list of emotions, then a list of actions, and started putting together a diagram in Visio. The end product looked like this:

So this wasn’t too bad, but I knew I could be a lot clearer about what I was trying to convey. I went to our artists and asked if it made sense to them; they replied that they couldn’t tell what a Booyo would actually be doing in these states, only what it might look like. That’s when I realized I was doing this backwards.

Fixing my Behaviour Tree

Typically, AI states are treated as actions. For instance, a standard guard AI would probably have Patrolling, Searching, and Chasing states (maybe even a Fleeing state), with conditionals that navigate it through the behaviour tree. Treating AI behaviour this way makes it easy to understand what the game object is doing in a given state. Using the guard example again, it’s easy to know that Patrolling means the guard is moving through the level in a pattern, trying to stop the player or an NPC. However, with the behaviour tree I came up with, if I said a Booyo is currently Scared, it’s vague as to what the Booyo is doing while Scared. Is it shivering? Is it moving? How long does it last?
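
To show what I mean by action-oriented states, here’s a tiny, hypothetical guard state machine in Python. It’s not from our project, just an illustration of verbs-as-states with conditional transitions:

```python
from enum import Enum, auto

class GuardState(Enum):
    PATROLLING = auto()  # walking a fixed route through the level
    SEARCHING = auto()   # investigating a noise or a lost target
    CHASING = auto()     # player spotted, actively pursuing

def next_state(current, sees_player, heard_noise):
    """Pick the guard's next state from a couple of simple conditionals."""
    if sees_player:
        return GuardState.CHASING
    if current is GuardState.CHASING or heard_noise:
        # Lost sight of the player, or heard something suspicious.
        return GuardState.SEARCHING
    return GuardState.PATROLLING
```

Each state name is a verb, so anyone reading the tree immediately knows what the guard is doing.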

I refactored my work and redefined my states to be action-oriented instead of emotion-based. The states I came up with were as follows (there’s a quick code sketch after the list):

  1. Idle – starting state; the Booyo wanders around the player
  2. Chirping – the Booyo randomly stops wandering to let out a little chirp
  3. Dancing – transitional state that happens when two Booyos bump into each other without player interaction
  4. Grabbed – the Booyo’s position follows the player’s hand that grabbed it while in this state
  5. Thrown – the Booyo is sent flying in the direction the player threw it
  6. Merging – after Dancing, one Booyo rapidly shrinks while the other rapidly grows, giving the idea of being “absorbed”. Which Booyo does the absorbing is determined by size, or at random if both Booyos are the same size.
  7. Popping – once a Booyo has reached its maximum size, it pops and releases every Booyo it absorbed.
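
As a rough illustration of how these action states hang together, here’s a hedged Python sketch of the Dancing-to-Merging-to-Popping flow. The state names come straight from the list above, but the MAX_SIZE threshold and the class layout are stand-ins I’ve invented; the real behaviour was implemented in-engine:

```python
import random
from enum import Enum, auto

class BooyoState(Enum):
    IDLE = auto()      # wandering around the player
    CHIRPING = auto()  # paused wandering to chirp
    DANCING = auto()   # two Booyos bumped into each other
    GRABBED = auto()   # following the player's hand
    THROWN = auto()    # flying where the player threw it
    MERGING = auto()   # one Booyo absorbing another
    POPPING = auto()   # max size reached, releasing absorbed Booyos

MAX_SIZE = 5  # pop threshold; the real value was tuned in-engine (assumed here)

class Booyo:
    def __init__(self, size=1):
        self.size = size
        self.state = BooyoState.IDLE
        self.absorbed = []  # every Booyo eaten so far, released on popping

def merge(a, b):
    """After Dancing, the bigger Booyo absorbs the smaller (random on a tie)."""
    if a.size == b.size:
        absorber, absorbed = random.sample([a, b], 2)
    else:
        absorber, absorbed = (a, b) if a.size > b.size else (b, a)
    absorber.state = BooyoState.MERGING
    absorber.size += absorbed.size
    absorber.absorbed.append(absorbed)
    # Popping releases every absorbed Booyo once the maximum size is reached.
    absorber.state = BooyoState.POPPING if absorber.size >= MAX_SIZE else BooyoState.IDLE
    return absorber
```

The key details from the list live in merge: size decides the absorber, a coin flip breaks ties, and the absorbed Booyos are kept around so Popping can release every one of them.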

Already I had more states than I started with, but it was much clearer how these things would behave during gameplay and how they correlate to each other. Following what I did last time, writing down the states and the actions required to reach each one, I came up with a few variants of the state tree and settled on this final tree:

Bringing it to the Art Team

Now I had something that showed off the logic of the behaviours really well, but I needed something to convey what these states should look like. I asked a friend on another team what he would recommend, and he suggested animatics, which are more or less rough storyboards that convey progression and appearance. Perfect, exactly what I needed. I drew up a few sketches, threw them into a PowerPoint, and showed the rest of my team.

I’d like to wrap up with some things I’ve learned from this experience and what I think other designers can get from this post.

  1. Check in with the people you are communicating to. A big reason I had to redo my AI tree was that I was working in a bubble for the most part. It was only when I showed my team that I noticed a big problem with its design. The people you are communicating to are your audience, so test with them. Game design is an iterative process because you’ll rarely find a solution on your first go.
  2. Treat AI states as actions. Seriously. Do it. Some people might tell you to only use words that end with -ing, but honestly, as long as you convey the action of what the AI is doing, that’s good enough. Even if the AI is not doing much physically, it helps frame the system a lot better and it makes it easier for other people to read.
  3. Don’t be afraid to use different or unconventional tools to help explain a design. At first glance, I dismissed animatics as an “animator-only” tool. This couldn’t be farther from the truth. Using the animatics, I was able to effectively communicate my direction to the art team so that they had a frame of reference to animate from. The same goes for any other tool or technique that might be viewed as niche or exclusive to a specific discipline. Branch out and try new stuff. If you have a weird thing you need to convey, chances are there’s a tool that will help you convey it well.

What’s next?

Now that we have a clear idea of how Booyos behave, I can go into the engine and start to implement the logic. At the time of writing this, next week’s goal is to get the AI behaviour completely implemented into the game so that tech and art can come in and add the functionality and animations to bring these Booyos to life.

Dev Log #3 – Playtesting for feedback and usability in a Mixed Reality space

Hey you! In my last post, I said that we had reconsidered a lot of our original design choices and covered the process that led us to where we currently are in the project. The project now has a much more defined concept, which is:

A sandbox mixed reality pet sim where the player can interact with floating blobs by poking, petting, squishing and merging them.

Our game is built around the idea of open exploration: we don’t want to explicitly impose strict goals on the player. Instead, we want to present rules and interactions that the player can use to create their own goals. We currently have a pretty full list of possible interactions, but for the alpha, we only really want to test three in particular: picking up, pushing/poking, and merging/splitting (I mean, when I put it like that, it’s more like five). The game is all about player experience, so we have to consider not only how these interactions will work but also what kind of feedback they’ll produce when the player performs them. That’s where testing comes into play!

For the last few weeks, our narrative designer and production manager, Jen, and I have been performing focus group tests with people around the school. Our goal has been to help the other departments on our team get an idea of what direction we should go based on feedback from potential users. Essentially, we have filled a bit of a QA role these last few weeks instead of design, but we both decided that it should be fine considering that the bulk of the design work for the alpha has already been done and getting early user data would be beneficial.

Preliminary Designs by Yani Wang
Silhouette Table by Keana Almario and Yani Wang

So what did we subject the poor souls of Sheridan College to in the name of SCIENCE!?! Well, first we did preliminary art tests for our art department. We want the creatures the players are interacting with to facilitate what you can do to them. Balancing that with our goal of always presenting these creatures as being alive, as opposed to objects, ended up being tricky. Like, really. How do we get people to squish, push, and smoosh creatures if we tell them they’re alive? Our art team, consisting of Keana and Yani, drew up and produced this to help test designs with:

Using the mock-ups, we took my now-deceased laptop (R.I.P. Lappy) into the halls of Sheridan and asked students what they thought. During our first tests, we found that most people liked the blobbier, less animalistic designs from the first page, and that people really liked N, C, and F from the second page. We took these results back to our art team so they could use the feedback in further designs.

After that set of testing, Jen and I decided to do some “material testing”. I don’t think this kind of test could exist for anything other than a mixed reality game, but we felt we needed good feedback to get a sense of how our blobs should respond to player interaction. To simulate these interactions, we bought objects that resembled the consistency and material of the blobs we wanted to make, took them to school, and asked people what they thought. These toys included some slime, silly putty, play-dough, and a stress ball.

And no, we didn’t add googly eyes while we were testing.

We found that the two most popular materials were the slime and the silly putty, as they were squishier and more fun to play with. Conversely, we found that the stress ball was the least favourite, as it was harder to squish. People also mentioned they couldn’t really picture a creature with the same consistency as the stress ball, since they were tempted to throw it instead of petting it. We recorded the data and gave it to our art team for review.

Art board created by Keana Almario

Lastly, we took another art mock-up sheet and went out to ask students what they thought about the designs.

The mock-up was split into rows and columns, and we asked students to pick which blob they liked and why. The designs with the white boxes around them were the most popular choices, as they were the ones with the most character. Oddly enough, we found that the arms on the blob were very popular, which seemed to contradict the first test, where people favoured the more amorphous designs. In the end, we came up with a final-ish design for the creatures, which looks like this:

Visual Mockup by Keana Almario

So this is all well and good, but you’re here for design, right? I like to think that testing is part of the design process. We want to make an engaging experience for players, and because of that, we have to be in tune with player reactions as often as we can throughout development. Also, as I said before, we wanted the blob’s design to facilitate how the player can interact with it, making it just as much a design challenge as an art challenge. Now, where do we go from here? With the semester wrapping up, we don’t have much left to do before the alpha comes out, but Jen and I would like to hit the ground running by prepping for a couple of challenges we know we’re going to face in the new year.

To start, we currently have an issue where, if the blobs get too big, they can clip through the player’s head, causing the blob to disappear when the player picks it up. We’ve already discussed some possible solutions, such as having the blob burst if you feed it too many other blobs, but that’s just one of the challenges we’re foreseeing. Next post, I’ll be talking about how we approached solving these challenges, as well as some other cool systems I helped design.

Dev Log #2 – Designing virtual interactions in a physical space

Hey you! Last week, I talked about our design challenge and phrased it a bit like this:

How do you design an engaging and immersive game world using the real world around you?

This challenge guided us throughout our early production stages and gave us something to look back at if we found ourselves stuck on a particular problem.

One of those problems involved arguably one of our more interesting features.

One concept that went pretty far into production was a monster hunting game similar to Keep Talking and Nobody Explodes. One player would be the hunter, who could see the monster and attack it, while the other was the veteran, who knew how to defeat the monster based on a manual we’d give them. We tested the concept and found success in making it fun and engaging, but we hit a practical problem: what’s stopping people from playing this game on their own? MR lets users see the game on top of the real world, so one issue we found was that the player fighting the monster could simply hold onto the manual and fight the monster themselves. Another issue was that, for the non-MR player, fighting a monster is more fun than reading a book, so is it actually fun to be the non-MR player? The team knew we had to rethink things, and during one of our meetings, I proposed something to the team:

Instead of fighting the monster, what if you were caring for it?

Prior to this meeting, we had talked to our contact at Shadow Factory, Keiran Lovett, who suggested we focus on UX. We took that advice into the meeting and tried to step away from mechanics and goals. The intention was to make the game feel more like a sandbox experience, where the focus is more on emergent gameplay than on set rules and win conditions, which is when we landed on the monster pet-care idea. We eventually iterated further, so the monster turned into multiple spirits, or will-o’-wisps, that the player can interact and play with.

Despite sounding pretty simple, this decision was actually incredibly difficult and came with its own risks. At this point, we were far into the semester, with a playable alpha looming 6-7 weeks away (not including a week when the majority of the team would be in Montreal), and changing our game could prove incredibly risky. We had a couple of arguments, long awkward silences, and debates over Naruto characters, but in the end, we made the call and went forward with our plan to change our concept.

So was it a good call?

It’s certainly too early to say, but the project has been making considerable progress from that meeting to the time I’m writing this dev log. Since then, we’ve managed to do a bit more testing with the new concept, similar to our early testing, including some digital prototypes like the one seen here.


One early parameter we set for ourselves when designing for this game was:

The spirits need to feel like creatures and not objects

What this means is that the spirits need to react to player interaction. We want to sell the player on the idea that when they put on the headset, they are viewing an unseen world that exists on top of our own. For this narrative to work, the creatures that reside in this world need to feel alive and responsive. This parameter has guided our design process and the interactions the player can have with the creatures.

As I mentioned, our playtesting style is very similar to what we’ve done before. Testing this way is effective because our interface is the player. We need to design a game that makes interactions feel as natural as possible. By doing physical prototyping, we are bound to real-world concepts such as physical space and gravity, grounding our design ideas in them. With that being said,

How do you prototype ghosts?

As far as I know, ghosts aren’t real, and if they are, I’m not sure how to acquire one for playtesting. However, balloons are real, and ribbons are real, so combine the two and you get:

Okay, so it’s not perfect, but it did get us the kind of movement we wanted for the wisps, which looks like this:

Cool! Now that we had an idea of what kind of game the player would be taking part in, the important question that followed was: what interactions are fun to have with the wisps?

Three other team members and I sat down and brainstormed a list of ideas we’d like to explore, which I later transposed into a chart in our GDD that looks like this:

Where we are now

Officially, we are out of pre-production, and production is in full force. We currently have some digital prototypes, and we’ve received equipment that gives us the freedom of movement we want, so we should be testing that very soon.

As for our next steps, we are looking at exploring more interactions with the wisps and producing digital prototypes we can test externally; we plan on going to malls and testing there. Meanwhile, our artists are working on first passes of the creature designs we tested with students at the school.

Dev Log #1 – The Importance of Physical Prototyping for Mixed Reality Games

Mixed reality. The future of interaction, a brand new world coexisting with our physical world. Working with mixed reality (MR) sounds cool, right? On paper, sure it does!

“You’re trying to tell me, that I can look at my actual hand and actual fire will come out of my actual hands actually? That actually sounds neat!”

Yet how do you design for this? Ah, less intriguing now, right? It’s kind of like me asking you if you want a hot fudge sundae and then showing you a cow and an ice cream maker and saying, “well, get to it then!” This has been my constant state of mind for the past six weeks, ever since we took on the challenge of creating an asymmetrical multiplayer MR game. Wait, what? Now there’s asymmetrical multiplayer involved? That wasn’t part of the deal; I want my money back! To help visualize the problem, I’ve restated it as something less abstract and a bit more concrete:

How do you design an engaging and immersive game world using the real world around you?

YOU PROTOTYPE

To quote Nicole Lazzaro from her Matrix vs Pokemon GO GDC talk,

“[With VR games] The world itself is a genre, and the interaction with the world is the game.”

What she means is that the world defines the actions and goals the player engages with inside it. Take a look at Rick and Morty: Virtual Rick-ality. In it, Owlchemy Labs recreated Rick’s lab from the show, and in their GDC talk, they mention that they aimed to be as accurate to the show as possible. Why? Because the world itself is the genre: the game simulates what it would be like to be in the show, and interacting with that world is the game. This is why Owlchemy made sure that everything that looks interactable in the game is interactable. For example, there’s a door behind the player that would lead to the rest of the Smith household, but obviously creating a fully realized house would be a huge task. Yet having a doorknob the player can’t actually interact with doesn’t feel right; it breaks immersion. So what did they do? They did this:

This kind of thinking is what I brought to the team when prototyping, but the issue is: how does this kind of thinking translate to MR?

Okay, so let me explain: with VR, you create an environment the player can roam around in. With MR, you incorporate the game world into the real world the player can see. Early on, our team decided that if this game was going to be MR, then there had to be a reason; it couldn’t be arbitrary, like being able to wave to your boyfriend while killing zombies (which might be the best Hallmark commercial ever conceived). So what would be the reason for MR? Well, let’s take a look at that Hallmark commercial again: what if you could play a game with someone who can’t see the world you can see? From there, we decided that we were going to develop an asymmetrical MR game.

“Get to the point, what did you do?”

Alright, so beforehand, we ran a few paper prototypes where we had one of our team members, Jen, don our makeshift MR headset to emulate what the setup would be like. We had her perform a few mechanics we wanted to test while recording through the camera on her head. We found that it was interesting to move around freely thanks to our equipment (essentially a computer backpack), but where we really started to find success was when another team member filled in as an “AI partner”. The consensus? Having someone work with or against you is a lot of fun.

To take this further and isolate what we liked about the asymmetrical multiplayer, we came up with a concept for a red-light green-light game. For this, we had Jen try to steal an object while another team member, Justin, tried to catch her in the act. After some iterations involving spotlights, lasers, and our best alarm impressions, we solidified asymmetrical gameplay as one of our main pillars.


What did I learn?
VR/MR games need to be developed as VR/MR games, and not the other way around. What I mean is that you can’t just slap Mario into MR and call it a day. These games take advantage of depth, which is impossible on a 2D screen. For the design to work, the environment needs to facilitate the player’s interactions, and that involves depth. I also learned that there will never be enough Clorox wipes for the amount of sweat produced by four hours of extensive VR research.

So what now?
We have an abstract idea for a game, but nothing concrete yet. As of now, we seem to have a game concept we really like and want to finalize. The concept borrows heavily from Keep Talking and Nobody Explodes’ use of communication between players who are given different information. The team and I plan on prototyping this concept today to see if it’s something we want to pursue, and I’m very excited to see where this goes!

 

References

‘Matrix’ vs. ‘Pokemon GO’: The Mixed Reality Battle for the Holodeck by Nicole Lazzaro

‘Rick and Morty: Virtual Rick-ality’ Postmortem: VR Lessons *Burrrp* Learned by Alex Schwartz and Devin Reimer