Ted Alspach
Castles of Mad King Ludwig

Castles of Mad King Ludwig debuted at Essen Spiel 2014, and has since gone on to be very successful for Bézier Games, Inc., with several printings and a ranking in the top 50 on BGG. Prior to the analog version of the game’s debut, I was already deep in discussions with Jeremiah Maher, the developer who has done amazing work for Bézier Games, Inc. in the past, including the Suburbia app and the free One Night Ultimate Werewolf app, to produce an app version of Castles.
Throughout this diary I’ll bounce around from “I” to “we”. When I say “we,” I’m referring to the collaboration between the developer (Jeremy) and myself. There are a whole bunch of things that the developer did pretty much on his own, too, and I’ll try to highlight those as they happened.
My involvement with the app version of the game is pretty much continually hands-on; I view the end result of the app as a collaboration between the developer and myself. In many ways, it’s the same relationship I’ve had in my previous working life as a product manager for Adobe, Intuit, Corel and other software companies, but instead of developing productivity applications like Illustrator and payroll software, we get to work on boardgames! My job is to come up with the overall direction for the app, provide guidance on what features I’d like to see implemented, supply all the graphics, rules, and game design concepts from the analog game to the developer, provide input on the UI, design the AI based on my understanding of how the game works, create the framework for the campaign mode, and design all of the campaign levels. I’m also playtester #1, receiving builds at least once a week, running them through their paces, and providing the developer with feedback and detailed bug reports. The two of us have regularly scheduled Skype meetings every two weeks in which we discuss the current progress of the app’s development, next steps and scheduling, and work through all the issues that have cropped up in the recent builds.
The developer’s job is to do the “real” work…coding and lots of stuff that I take for granted, because from my point of view, it all seems so effortless (because I’m not doing that work). And there’s an amazing amount of work that goes into an app like Castles. I really can’t emphasize enough how awesome Jeremy is…he’s an independent developer, with no staff (but with a very supportive family), who did all the development work for Castles by himself in just under 16 months. Castles is a fairly complex game by itself, as you’ll see in this article, and if it were just a matter of producing a workable game, that would be one thing. Instead, it’s a rich app with a giant campaign mode, an amazing AI, and a very polished UI. Considering that there are so many boardgames that STILL don’t have an app, even from much bigger game studios, this is a huge accomplishment. During this time period, Jeremy also did a huge update for the One Night app (which again is an awesome piece of cross-platform technology) and created a stand-alone cross-platform setup app for Favor of the Pharaoh.
Initial design and prep work
The Suburbia app had been out for several months when we started the initial design work on the Castles app, and Suburbia had suffered from some initially lukewarm reviews due to (1) a nasty bug that wasn’t caught in beta testing and (2) limitations in Game Center for online play. The bug was fixed quickly, but the Game Center issues continued to linger on. Even with that, the Suburbia app has held steady at 4/5 stars, and sold reasonably well. It continues to sell even now, 4 years after analog Suburbia was released, and more than 2 years after the app was released.
One thing that limited the Suburbia app’s potential to reach a large audience was that it was tablet-only. There are about 7 times as many smartphones as tablets in the mobile ecosphere, so we were only reaching 1/8 of the possible number of mobile users. We really wanted to make Castles work on phones as well as tablets, and that ended up being one of the chief goals of the UI and graphical design: it had to provide a solid, quality experience on much smaller screens than a typical iOS or Android tablet.
The initial framework for the Castles app was decided early on: It would feature local multiplayer pass-and-play matches, matches against 1, 2, or 3 AI players, and a solid campaign mode. We decided to take asynchronous online play out of the mix almost immediately for three reasons:
1. The potential for issues/bugs is incredibly high on both iOS and Android platforms
2. Because of the Master Builder phase, there’s an additional player switch every game turn that slows things down for asynchronous play
3. A relatively large chunk of development time would have to be devoted to implementing it, resulting in either a delayed release or reduced features for the rest of the game.
The pass-and-play matches would be designed to match the physical board game’s style of play, with all features from the game present. For development purposes, this was the underlying framework of how the game would work in the app, and a logical place to start development. This included graphics, placing tiles so that doorways matched, implementing the point system, and all of the other systems like master builder pricing, king’s favors, rewards, etc.
The AI would be the next step in development. The Suburbia AI was good, but not great, and we were determined to make the Castles AI even better. This is a pretty daunting task for a game like Castles, where players’ choices for which tiles to buy are based on not just immediate points, but also long term structural goals and potential for rewards, and all the different end game VPs (King’s favors, secret Bonus cards, money remaining, and depleted stack points). And there’s another layer on top of that: the Master Builder mode meant that the AI would have to evaluate the other players’ intentions when pricing rooms, weighing the needs of opponents against the needs of the AI.
Finally, the campaign mode had to be really good. The Suburbia app received many accolades for its campaign mode, though it was fairly limited in scope and length. Our goal was to improve upon the Suburbia campaign mode in every way, giving the levels more variety, providing a more compelling system for level rewards, and making it the correct length and difficulty.
Initial gameplay implementation
The first step was getting the rules and graphics to the developer. Jeremy was given a prerelease copy of the game and the rules as a first step. With a thorough understanding of how the game was played, Jeremy then deconstructed the underlying physical structure of the game: The “table” area where the game would be played, how tiles could be placed, and how points were tabulated.
He then set out to build the system within the app for placing room tiles. Dale Yu (the developer on Castles) and I had already done this with the analog game, devising a grid system that all the pieces had to fit into. Each square in the grid corresponded to 25 square feet (a 5-foot by 5-foot square), with possible doorways centered on each of those grid squares. This is the secret sauce that allows Castles rooms to fit together so nicely; all rooms (including the round 150 and 500 rooms) fit into this grid. For instance, a hallway is a 1x6 room, while a 100 room is a 2x2 room. Once I communicated this to Jeremy, things began to take shape quickly.
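The grid system above can be sketched in code. This is only an illustrative model: the room shapes, names, and helper functions here are my own assumptions, not the app's actual data structures.

```python
# Hypothetical sketch of the 5' x 5' grid system described above.
# A room shape is a set of (col, row) grid squares it occupies.

HALLWAY = {(c, 0) for c in range(6)}                      # a 1x6 room
ROOM_100 = {(c, r) for c in range(2) for r in range(2)}   # a 2x2 room

def translate(shape, dx, dy):
    """Shift a room shape to a position on the table grid."""
    return {(c + dx, r + dy) for (c, r) in shape}

def placement_is_legal(shape, occupied):
    """A room may only occupy grid squares that are still empty."""
    return not (shape & occupied)

# Place the 2x2 room at (3, 4), then check that a hallway can't overlap it.
occupied = translate(ROOM_100, 3, 4)
assert placement_is_legal(translate(HALLWAY, 0, 0), occupied)      # clear
assert not placement_is_legal(translate(HALLWAY, 2, 4), occupied)  # overlaps
```

Because every room (even the round ones) reduces to a set of grid squares, legality checks and doorway matching become simple set operations, which is exactly what makes the analog grid "secret sauce" translate so cleanly to software.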
On January 20th, 2015, in our bi-weekly Skype meeting, Jeremy showed me an alpha build of the game. It wasn’t really a game at that point, but instead was a cobbled-together chunk of framework technology that allowed dragging rooms down from a transparent “contract board” overlay along the top edge onto a playfield that was covered with numbers (each number being a square in the “table” grid that served as the playing surface). Even this early, many of the UI elements that would make it into the game were in place: The highlighting of possible placement locations, and the rotate and cancel buttons.
The first build I received, a few days later, had the same things in place, but I was able to test it out for myself. Zooming was already implemented at this point, and I was thrilled to drag rooms around and have them connect. It’s the little things.
A bug is found - in the analog game
You know how I was just going on and on about how awesome Dale and I were, coming up with that grid system that everything fit so neatly into? Turns out it wasn’t so neat after all. There was an issue that no one noticed, even after the game was printed (and I still haven’t seen anyone report it). The thing is, the octagon rooms are…wrong. Foyers and 600 rooms have their angles cut in the wrong places. The result is more aesthetically pleasing when the rooms are by themselves, but when you try to fit them together, they’re off.
This wasn’t an option in the electronic version; the pieces had to fit correctly. Unfortunately, it wasn’t as simple as just adjusting the shape; the artwork on the pieces was the wrong shape too, and it would have to be changed or the tiles would appear distorted. Fortunately, the artwork had been done in Illustrator, so it was relatively straightforward to modify the walls and contents to make them work in the app. If the tiles had been pixel-based, the artist would have had to redraw both of them entirely.
Initial AI work
The first step in implementing any AI is to get it to follow the rules. The AI works under the same set of restrictions that a human player does. It’s at the decision points, where it can do different things, that things get tricky. The first builds with the AI had it randomly choosing a tile from the contract board and randomly placing it in an available spot. Not particularly earth-shattering, but the first few times I saw it, it already felt like the AI was making decisions (it really wasn’t). Jeremy had implemented it so that if the room being placed needed to be rotated, it smoothly rotated as it moved from the contract board down to the playing area, a really nice effect that made it into the final game (even though it’s the kind of “polish” we’d normally have done late in the process to make the game look better). After that was in place, there was an option for what he called “rapid play,” which took a fraction of the time but wasn’t as graphically pleasing. This was for testing purposes, to make sure there weren’t issues with placement or the other decision making that would be implemented soon.
Two months in, the app had a lot of the fundamentals in place: a random setup of the contract board from a shuffled deck of room cards, tile pricing, Living, Activity, and Outdoor rewards, connection points, placement points, downstairs icon points, and score tracking. It was almost playable. In just two months.
Of course, it was the many details in the game that would extend the development time quite a bit, as we’d soon find out.
After about 5 months, most of the actual game was in place, and it could be played against a computer AI opponent (who was still just randomly placing things, not much of a challenge) or a human opponent via pass and play. A lot had been implemented in that time period: King’s favors, Bonus cards and Utility room rewards, Sleeping room rewards, Corridor rewards, Food rewards, depleted stack endgame points, Passing as an option, Endgame money scoring, the ability to view other players’ boards on your turn, and a status bar at the bottom of the screen that tracked actions from each player (initially used for bug tracking, but then kept in the game so that human players could easily review what computer AI opponents had done during their turn if they weren’t paying close attention).
While Jeremy was busy coding, I put together a Castles AI document. This was my attempt to quantify strategic choices, boiling everything down to mathematical decisions. Here are my initial AI concept notes:
Here are the main vectors for analysis by the AI:
1) Immediate points: how many points is each room worth? What is the cost per point?
2) Favor lead: Similar to what we did with Suburbia for goals. Being ahead on each of the favors is worth X points.
3) Reward value: what is the likelihood of completing a newly purchased room? What is the value of completing it?
4) Future turn look ahead, 2-3 turns out: placing stairs gets the AI no points now, but it will get the AI access to downstairs rooms and a free corridor (by completing the stairs).
5) Value of rooms to opponents: buying a room is not just points for the AI, but also might reduce the number of points for opponents by removing it.
6) Master Builder pricing: pricing rooms out of reach of opponents vs. pricing them just within reach to maximize money during the MB turn.
7) Personal Bonus card valuation: This should always be considered.
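As a rough illustration of how vectors like these can be folded into a single number per candidate room, here's a minimal sketch. Every field name, the sample numbers, and the points-per-dollar scoring rule are my own assumptions for illustration, not the app's actual code (and vector 6, Master Builder pricing, is a separate phase of logic, so it's left out here).

```python
# Hypothetical sketch: collapse the valuation vectors into one score
# per candidate room, then pick the best affordable option.
from dataclasses import dataclass

@dataclass
class RoomOption:
    cost: int
    immediate_points: float   # vector 1: points now, and cost per point
    favor_lead_points: float  # vector 2: value of being ahead on a favor
    reward_value: float       # vector 3: likelihood * value of completion
    lookahead_points: float   # vector 4: value 2-3 turns out
    denial_value: float       # vector 5: points denied to opponents
    bonus_card_points: float  # vector 7: personal Bonus card valuation

def score(option, money):
    if option.cost > money:
        return float("-inf")  # can't afford it at all
    total = (option.immediate_points + option.favor_lead_points
             + option.reward_value + option.lookahead_points
             + option.denial_value + option.bonus_card_points)
    # Cheaper routes to the same points are better: points per dollar.
    return total / max(option.cost, 1)

options = [
    RoomOption(cost=6, immediate_points=4, favor_lead_points=1,
               reward_value=2, lookahead_points=0, denial_value=1,
               bonus_card_points=0),
    RoomOption(cost=12, immediate_points=7, favor_lead_points=0,
               reward_value=3, lookahead_points=1, denial_value=0,
               bonus_card_points=2),
]
best = max(options, key=lambda o: score(o, money=10))
assert best is options[0]  # the second room is unaffordable with $10
```

The real AI, as described below, replaces each of these flat numbers with formulas that shift over the course of the game; the point here is only the shape of the decision: quantify every vector, sum, and compare.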
That was a very high level look at what the AI needed to do, without any mathematical solutions. I also wrote up another document specifically for optimal AI room placement. You’ll see in the final game that the AI places rooms really well most of the time, particularly using its ability to close off multiple doorways at once with a single room placement. However, there are so many variables and options that, in order not to overheat your device, a lot of shortcuts were put in place, so once in a while you’ll see the AI do something a little unusual, like blocking off a doorway or not leaving itself enough space to place a room next to a doorway.

AI Mathematics
When the Skynet AI gains consciousness, it will do so because it understands math better than we do. Until then, we have to provide a computer AI with all the building blocks it needs to make human-like decisions. I wrote up an initial attempt at optimizing computer choices with a formula based on the most important concept in this and many other VP-based games: how many points will you have at the end of the game as a result of each decision you make? That formula is:
End Game Points = Current Points + (potential) In Game Points + (potential) Final Scoring Points
It’s the “potential” points that are really tricky. Each room you place will give you points instantly, but most also have “potential” points that may be realized later in the game. For instance, when you place a Dining Room, it gives you 1 point, plus 3 connection points if you place it next to a Living Room type room. But it has the potential of giving you additional points: 3 more from placing another Living Room on its open doorway, the points you would get from having an extra turn when the room is completed, the points you get if you obtain a Bonus Card for either Food rooms or 300 rooms, King’s Favor points if this gives you enough to move up a level for those end game points, 2 points if the 300 stack is depleted, and additional connection points from all the room tiles with Food room icons on them that can be connected to the Dining Room.

The value of these potential points changes over the course of the game. At the beginning, the odds that you’ll obtain the other things needed to realize them are much higher; toward the end, if you’re the last player to play, they total 0, since nothing can happen after the tile is placed. So potential points are variable, based on time, the availability of other tiles, the amount of $ each player will have throughout the rest of the game to purchase more rooms, Bonus cards, and other players’ boards (players who might be interested in the same rooms or Bonus cards that are potential points for the Dining Room).
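The time decay of potential points described above could be modeled, in the simplest possible way, as a linear interpolation from full value down to zero. This sketch is purely illustrative; the app's actual decay model is surely more nuanced.

```python
# Hypothetical sketch of time-decaying potential points: a potential
# point is worth its full value on the first turn and nothing once
# no further placements can happen.

def potential_value(full_value, turns_remaining, total_turns):
    """Linearly scale a potential point value by the fraction of the
    game left; by the final placement it is worth 0."""
    if total_turns <= 0:
        return 0.0
    return full_value * max(turns_remaining, 0) / total_turns

# A 3-point potential (e.g. connecting a future Living Room):
assert potential_value(3.0, 10, 10) == 3.0  # start of game: full value
assert potential_value(3.0, 5, 10) == 1.5   # mid-game: half value
assert potential_value(3.0, 0, 10) == 0.0   # last placement: worthless
```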
What this really meant was that all of these things needed to be quantified so the AI could make the best possible decision. Here’s an example: For the Dining Room, the AI has to determine what the value of the Food room completion reward (an extra turn) is. To do that, it needs to figure out the Average Points per Turn it thinks it will get for the rest of the game. This is the math for that:
APT (Average Points per Turn) = (CRP + IGP) / (NTR + NTT)
CRP is the Current Points.
IGP is the potential In-Game Points, which is equal to Potential Connection Points (CP) + Potential Completion Rewards (CR) + Potential Room Points (RP)
CR = (NTR / average number of open doorways per incomplete room) * average reward value of incomplete rooms
RP = NTR * average value of known and unknown room placements
NTR is the Number of Turns Remaining = (room cards remaining / number of players) + (incomplete Blue rooms (played or on the contract board) / number of players) + ((unpurchased hallways + unpurchased stairs) / 2) / number of players
NTT is the Number of turns taken
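Transcribed directly into code, the APT formulas above look something like this; the variable names mirror the abbreviations in the text, and the sample numbers at the bottom are invented purely for illustration.

```python
# A direct transcription of the APT formulas above.

def number_of_turns_remaining(room_cards, incomplete_blue,
                              hallways, stairs, players):
    """NTR, per the formula above."""
    return (room_cards / players
            + incomplete_blue / players
            + ((hallways + stairs) / 2) / players)

def average_points_per_turn(crp, cp, avg_doorways, avg_reward,
                            avg_room_value, ntr, ntt):
    """APT = (CRP + IGP) / (NTR + NTT), with IGP = CP + CR + RP."""
    cr = ntr / avg_doorways * avg_reward  # potential completion rewards
    rp = ntr * avg_room_value             # potential room points
    igp = cp + cr + rp                    # potential in-game points
    return (crp + igp) / (ntr + ntt)

# Illustrative mid-game numbers for a 3-player match:
ntr = number_of_turns_remaining(room_cards=18, incomplete_blue=3,
                                hallways=4, stairs=2, players=3)  # -> 8.0
apt = average_points_per_turn(crp=30, cp=6, avg_doorways=2.0,
                              avg_reward=4.0, avg_room_value=3.0,
                              ntr=ntr, ntt=8)                     # -> 4.75
```

So with those made-up inputs, the AI would value the Dining Room's extra-turn reward at roughly 4.75 points, before discounting for the chance the room never gets completed.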
All that just to figure out what the reward for a Food room is worth to the current computer player.
And the AI has to compare that Dining Room value to the value of placing every other room that’s available (including stairs/hallways, and the value of simply passing and taking $5), with more math that evaluates the point value of $$ at that point in the game, which is subtracted from the value of placing that room.
The weird thing is that this process seems SO complex when it’s broken down like it is above, but our brains do a lot of this stuff in the background while we’re talking about an entirely different subject, like whether it’s really worth the extra cash to purchase the amazing Broken Token wood insert for the analog Castles of Mad King Ludwig game (it is; go get it now, it’s awesome).
The good news is that all of this math applies to the Master Builder as well, though he’s got to run through all of these computations for each player, and do some other fun computations to balance the pricing of rooms he wants versus the rooms that other players want.
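As a toy illustration of that pricing trade-off, here's a minimal sketch in which the Master Builder tries every price and keeps the one that maximizes its income; the buy model and all the numbers are my own assumptions and are far cruder than the app's actual computations.

```python
# Hypothetical sketch of Master Builder pricing: for each candidate
# price, estimate income from opponents who would still buy the room.

def best_price(opponent_budgets, opponent_values, prices=range(1, 16)):
    """Pick the price that maximizes expected Master Builder income.

    In this crude model an opponent buys only if the room is both
    affordable and worth at least its price to them.
    """
    def income(price):
        buyers = sum(1 for budget, value in zip(opponent_budgets,
                                                opponent_values)
                     if budget >= price and value >= price)
        return price * buyers

    return max(prices, key=income)

# Opponents can spend up to $9 and $5, and value the room at 9 and 4:
# pricing at $9 (one rich buyer) beats pricing at $4 (two cheap buyers).
assert best_price([9, 5], [9, 4]) == 9
```

Even this toy version shows the tension the text describes: sometimes the best move is one expensive sale priced just within a single opponent's reach, and sometimes it's several cheap ones.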
The AI was continually tweaked throughout the development process, getting better and better. I’m really happy with some of the things that happen in the game, including when AI players collude against a Master Builder (I’m sure you’ll read about this in some cranky player’s online post if you don’t run into it yourself). The AI is not friendly; its only goal is to get the most points, and it will do whatever it has to in order to defeat its opponents. We didn’t program any good sportsmanship in there, and definitely no remorse.
One of the aspects of the AI that I’m particularly proud of is that it won’t buy really high priced rooms unless the value (End Game Points) is appropriate for the cost. This is also quantified in the app, with another series of formulas that takes a whole bunch of things into account.
Despite the fact that there’s a ton of AI computation going on for each decision it makes, the real time that the AI takes is a tiny fraction of a second…computer players take their turns instantly, without you having to wait for them to think through all of these scenarios. Three cheers for living in 2016, when technology has allowed AIs to make these complex determinations so quickly. In 10 years, we’ll look back at this time as when the AIs for most games really were good enough for all practical purposes, and decide that we should have stopped with the technological advances right here. That is, if the hopefully-benevolent robot overseers of 2026 allow us to dwell on the past at all.
Completing the basics
The core game, including 98% of the rules, a workable AI, graphics, and multiplayer (through pass and play), was in place by the end of August last year. Releasing by the end of 2015 seemed like a reasonable goal, as we only had the following to do: add the campaign mode, enhance the UI, polish the AI, and squish any outstanding bugs.
Each one of those things, however, took about 2 additional months of work. In hindsight, August 2015 Ted was such an optimist!
While players liked the campaign mode in the Suburbia app, we knew that it wasn’t as good as we had originally wanted, and that we wanted to do something better…a lot better…for Castles. While Jeremy was busting his butt coding the basics of the game during those first few months of development, I went to work on a campaign PRD that outlined the direction of the campaign: how it would work in terms of different levels, rewards, unlocking, and length, and also the components of the campaign that would make it interesting. We wanted to make each campaign level unique, with different goals and compelling gameplay, so that each new level that players opened up would be fresh and exciting.
The first thing that Jeremy did with the campaign was to create a campaign building mode for me, so I could have the tools necessary for level design. That was a huge help, as it allowed me to test different scenarios and try out all sorts of things that would have been impossible otherwise. Throughout development, the campaign builder got more and more features, to the point now where it’s really just a tool that takes what I’m envisioning for a level and materializes it in the app, and makes it pretty straightforward for Jeremy to implement in the actual app.
Designing the levels was as close as I got to my game designer roots during this process, as each level really is its own kind of game, bracketed by the framework of the basic Castles rules. In many ways, it reminds me of designing AoS/Steam expansion maps (of which I did a few dozen back in the day), where all you have to do is change one or two rules, very carefully and deliberately, and the result can totally reinvigorate the base game. Because each level can be any size and shape, I created an Illustrator document that allows me to quickly design the base shape, size, and location of key rooms (that’s right, in the campaign, some levels start with rooms already in place). I also designed a spreadsheet that captured all the key information in each campaign level, everything from number of opponents to which rooms are available to what the specific crown goals are.
Meanwhile, Jeremy came up with the basic concept and “story” of the campaign, which follows you, a novice castle builder, as you work your way up the Middle Rhine in Germany building castle after castle, each of them based on the actual castles that line both sides of the Rhine. I’ve been to several of these castles on side trips from my annual trip to Essen each year, and while the castles you’ll be building are quite different than what’s currently available for touring, Jeremy managed to capture the essence of the historical flavor of those castles, including a little introductory blurb on each one.
From September through most of April, the tutorial and campaign levels were continuously tweaked and redone, and I’m super happy with the way they turned out. It really adds a lot to the Castle game playing experience and makes the app hugely replayable.
Not only that, but we have plans for additional levels to be added to the game after it is released. I’ve already been working on the designs for a few of them.
The UI had been workable up through August of last year, but we had a huge list of things we wanted to add or change to make it better. Little things like how the end game score was displayed, how the player interacts with the new match options, how help should be available, what the background of the tiles should be, sounds, collapsing the contract board so that phone users could have more screen real estate, tapping rooms to temporarily zoom them right on the contract board, and so much more. Dozens of little things that add up to making the app feel as polished as possible.
The amount of work required for each little change, however, can’t be overstated. It’s one thing to agree to implement a UI feature, but then there’s the “how” that feature is implemented, the actual coding to get it to work, the testing to make sure it meets our expectations, modifications to the original design and more coding to make those modifications happen, and then of course fixing all of the bugs which pop up throughout that process.
As the project drew closer and closer to completion, Jeremy kicked into overdrive to make sure every little aspect of the UI was as perfect as possible. All sorts of little extras started appearing in builds…some of these things we had discussed, others he took the initiative on and just did them. Fortunately, Jeremy is one of those developers with a knack for both UI and graphic design, so it was incredibly rare for me to see a new feature and spit up on it…most of the time I was pleasantly surprised, like in the last few builds where a preview of the campaign level screen was shown (I had pinged Jeremy for a while about that possibility, but figured we were too close to release to get that in place), or when the music for each of the campaigns had a different starting point from a very long playlist.
Finally, even though we decided to forego online multiplayer (and in hindsight it was a good decision, as it allowed the rest of the game to be much much better as a result), Jeremy hooked up Game Center (iOS), Google Play Games (Android), and GameCircle (Kindle Fire) to track a variety of achievements within the app. It’s yet another little thing that makes the app feel even more polished and refined.
Polishing the AI
After testing the app for months and months, there were a lot of areas where I thought the AI could be improved. Of course, many of those things, while conceptually little, ended up being exceptions that happened in various circumstances. Getting the AI to place hallways properly, with all of their doorways and possible orientations/locations, was particularly tough, and even now once in a while the AI will place a hallway someplace that makes you scratch your head. The math involved with valuing room tiles related to the King’s Favors was another area that needed work: for instance, while it’s worth 2 points to be in first instead of tied for second (8 vs. 6 points), it’s really a 4 point differential between you and the player you’re tying with. If the AI is already crushing that player in points late in the game, the 4 point differential isn’t important, but the 2 additional points still are.
The AI process for the master builder also took some extra time. The math was all in place, but it still would price rooms unexpectedly, which felt random, and not all that smart. That required a lot of tweaking, but now I’m pretty sure that most players will be constantly amazed that the computer player always seems to price the rooms you want either just barely within reach, or, if it’s the perfect room for you, at a price you can’t afford. The only people in real life who do this process that well are your AP-prone friends, which isn’t an issue in the app (none of the computer opponents have any AP issues, fortunately).
All in all, the AI turned out really well, and while a really good human player can win some of the time, you’ll find it to be a good challenge.
Bug Squishing & Playtesting
The longer the development of any software project goes, the more bugs get introduced. Well…sort of. In my 20+ years of product management experience, it’s not just the time, but the amount and kinds of additions and changes that happen over time, that make bugs proliferate everywhere, often in places that are unexpected given the nature of recent changes.
Learning from Suburbia, where we did a very small amount of external playtesting, for Castles we brought on a whole bunch of beta testers back in January of this year, and then tripled that number in March. Getting feedback on the app from testers with different devices was great, especially since we weren’t just supporting iOS and Android; we were supporting phones and tablets on both platforms. While we didn’t cover all of the possibilities on Android, thanks to the ridiculous number of different devices out there, we had a good cross section that should have caught the majority of device-specific bugs.
In addition to just finding bugs, the playtesters served another, really valuable purpose for us…they helped us get the campaign to just the right level of difficulty. My personal preference for campaign modes in games is that completing them should be really, really, really hard, so much so that most people will *never* complete them fully, or earn all the achievements/rewards for them, but some expert players who stick to it will indeed be able to do it. If you aren’t tempted to throw your device across the room because you just can’t quite get a level, it’s not difficult enough.

Fortunately, Jeremy isn’t nearly as hardcore, and between the two of us, we brought the difficulty level of the campaign down to something that can reasonably be finished by pretty much all players somewhere in the 5-10 hour timeframe, with a ton of replayability, as each level is totally different from every other level. When you do complete a level successfully, you always have the option to play it again to try to do even better. We also included a reset button for the campaign so if you do complete it, you can go through it again and see how much better you’ve gotten since your first try. Our playtesters let us know which of the campaign levels were fun, which ones weren’t (some we scrapped and replaced), and whether each of them was indeed completable in a reasonable number of tries. The end result is a campaign that’s super-compelling, with an ever-increasing level of toughness as you make your way through it.
Release process and publication
To shed a little more light on the release process for apps: traditionally, Apple has taken 7-10 days to approve a new iOS release. Of course, there is the possibility that Apple spits up on the submission because of an actual bug, or because one of the multitude of mostly-unnecessary guidelines required by Apple isn’t addressed. So upon submission to Apple, we usually expect to have to wait at least a week until the app has been approved and is available in the App Store.
Like most developers who plan on launching their product on all platforms simultaneously, priority #1 was getting the iOS version done and submitted, due to those typically long review times. So our primary focus with the last few builds had been on iOS bugs and Game Center implementation. Once we had an iOS build we felt was good enough for submission, it was submitted to Apple, and then the focus changed to Android devices, including Kindle Fires. The Google Play submission process is very, very fast (within a few hours, usually), and the Kindle Fire submission queue takes less than 24 hours to get an approval, so we gave ourselves about a week after the iOS version was submitted to squish any Android bugs we could find with Google and Kindle devices. And then the unthinkable happened…Apple approved the app less than 48 hours after it was submitted! It turns out that they’ve shortened their review times significantly recently. It’s nice to see Apple putting their truckloads of cash to good use for a change by staffing up the app review group.
As I finish writing this designer diary, there are less than two days before Castles of Mad King Ludwig is released, and we’re stoked to see how players respond to this labor of love. On a related note, if you enjoy the game, please take a few minutes to rate and review the game on the appropriate store…the number of people who rate the game relative to the number of downloads is always a tiny fraction, and for games that the general public hasn’t heard of, those star ratings and customer reviews are their only exposure to what people think about the game. As a boardgamer, I suggest that you do this for all boardgame apps…the more ratings there are (especially for good, solid games), the more downloads of those games will happen, which funds developers allowing them to produce more and better boardgame apps. It’s a way you can help prime the pump and ensure that the boardgames we all love continue to be converted into great apps!
Thanks for reading this…I hope this provides some insight into the development of not just the Castles of Mad King Ludwig app, but other boardgame apps as well!
Castles of Mad King Ludwig is available now:
- iOS Universal, $7
- Android, $7
- Kindle, $7