Fairmeadow Fair, session 1


Fairmeadow Fair is a scenario that I use to introduce people to tabletop role-playing games.  My goal is to provide a wide range of activities so players can engage with the things that interest them: parties, conspiracies, thefts, fights. I don’t want people’s first introduction to role-playing games to be a fight to the death against implacable foes.  I’ve run this scenario for several groups, but this group asked me to return and make the one-shot into a campaign.

Our heroes are Lucia, a risk-averse female human Paladin, and Gleador, an impulsive & creative male elf Druid. They are headed to the town of Fairmeadow for the famous Fairmeadow Fair, which attracts people from all across the region. There will be good food, entertainment, new people, and plenty of excitement. I show them a sketch of the city, with major landmarks like the main roads, fairgrounds, town halls, and both inns.  Gleador wants to check the perimeter, so they go clockwise, right past the Brace of Pigs inn.

A map of the city of Fairmeadow

There’s a commotion at the Brace of Pigs and an Elf wearing a big backpack bursts out of the back door and runs out into the fields, away from town.  Jimmy, son of the innkeeper, gives chase with a broom, yelling, “Stop, thief!”  Our heroes try to stop the Elf with the backpack, but he turns into a bull and plows past them!  Gleador is a thing-talker, so he can shape-shift into things of the earth as well as animals.  He attempts to turn into wheat and entangle the bull, but he fails and ends up buried to the neck in dirt.  Lucia grabs a rope from her adventuring gear & attempts to lasso the bull, but is tangled in the rope and falls over.  Jimmy & his father Hobert catch up & extricate our heroes.

Lucia feels she must redeem herself from this embarrassing failure by capturing the bull.  Hobert explains that the man pushed his way into the inn’s storeroom and stole his special wine. The inn has a tavern in the front, a kitchen & storeroom in the back (that’s where the bull-elf exited) and rooms for rent upstairs.  Each year at the start of the Fairmeadow Fair, the Brace of Pigs Inn serves this special wine, and at the end of the fair, they bottle the next year’s batch.  It’s a point of pride for the inn and a tradition for the fair, so Hobert is very upset!

Gleador excuses himself (he doesn’t want everyone to know he’s a shape-shifting Druid) and turns into a crow.  He mobilizes the crows in town to fly out over the fields and look for the bull elf. They can cover a lot of area, but will take a few hours to search.

Lucia swears herself to a quest: she will recover the wine!  Her god grants her immunity to piercing weapons and senses that pierce lies, but in return, she must demonstrate temperance. Even if the wine is recovered before the fair is over, she cannot eat any fair food or drink any alcohol (including the wine she’s trying to recover!).  Since the bull-elf knew exactly where to get the wine, Lucia suspects an accomplice, so she starts questioning people in the tavern. It’s crowded, since even people who aren’t staying here have come for lunch.  She overhears a woman excuse herself & leave the tavern & knows that her excuse was a lie: she wanted to avoid Lucia & her questions.  Lucia follows her.

The woman heads to the busy marketplace, but is unable to lose Lucia in the crowd. Lucia asks her if she knows the thief. The woman denies it, but Lucia knows she’s lying, so she grabs the woman. The woman makes a scene, yelling at Lucia & slapping her across the face. Lucia is forced to back off by angry onlookers, but later corners the woman in a more private place.  Gleador has turned into a dog & recruited other stray dogs in town by promising that Lucia will give them all her fair food this weekend. The dogs form a perimeter around the strange woman, but she turns invisible and runs off!

GM: Wait, you still have that boon that pierces illusions, so you can still see her.

Lucia: Yup. I act surprised, then follow her.

The crows report that the bull is in a swamp some miles to the east, and that’s where the strange (and apparently magical) woman is heading. At the edge of the swamp, the stray dogs balk. The swamp seems unnatural and they are scared of it.  Gleador sends them back to town to guard the inn and report any strange scents.  He attaches a note to one of the dogs explaining that he and Lucia are on the trail of the thieves and to please give these dogs some scraps.

This delay has let the woman get far ahead, so Lucia has to follow her tracks through the swamp instead of just watching her walk across the fields. Lucia’s not very good at that & blunders into some quicksand. Gleador turns into a tree with an overhanging branch so Lucia can pull herself out. As they are struggling to get free, the woman appears, having heard the commotion.

Gleador: I continue assessing the situation as a tree.

The woman fires a lightning bolt at Lucia, which misses, but hits Gleador, setting his branches on fire. Lucia grabs a burning branch and wields it against the woman, who meets her with a dagger.

Gleador: I think I attack her at this point.

Gleador turns into quicksand to overwhelm the mysterious spellcaster. The flaming tree bends and pours over her, the flames going out as the branches turn into sand. The sand engulfs her, knocks her over, and reforms into Gleador’s Elf form, pinning her in a submission hold.  She knows she’s beaten, so she stops resisting physically, but yells and curses the town and our heroes. “It’s not enough that you pillage my crops and claim them as your own, but now you invade my home and attack me!”  Our heroes thought that she and the bull elf were the ones doing the pillaging, so they ask for clarification.  It turns out that the secret ingredient in Hobert’s special wine is a rare herb that Samantha (that’s the witch’s name) grows here in the swamp.  Hobert has been sneaking in and tearing out the herb, and Samantha stole his wine back as revenge.

Lucia is moved by their plight. Lucia & Gleador bring the wine, Samantha, and Ferdinand (that’s the bull elf) back to the inn. They return the wine to Hobert in the tavern, in front of all his customers, and give Samantha and Ferdinand credit for the wine’s unique taste.  The customers cheer Samantha and Ferdinand because they love the wine. Hobert can’t say anything against Samantha & Ferdinand lest his wrongdoing be revealed. But he’s got his wine back and the yearly tradition can proceed, so he’s happy too.  Everyone’s happy, except Lucia, who was rewarded with a bottle of excellent wine which she isn’t allowed to drink.


Bodypaint Generator: Just draw some spots

Now that I have all the photos from the Solstice Cyclists organized, I can start looking at cyclists, breaking down their paintjobs, and creating generators for them.

There are many ways I could implement the generator. Do I make an interactive webpage (Javascript, SVG) so that the public can create and modify paintjobs?  Since people are complicated three-dimensional objects, maybe I should use a 3D program, like Unity. To keep it simple, I’m using SVG.js to draw patterns on a square in a web browser. I’ll teach it about human bodies later.

The first cyclist I photographed just had some spots. (Here’s a photo. content warning: nudity)  How hard can drawing some spots be, right?  Harder than I thought.

Spots are mostly the same size, but not exactly,  so the generator has baseSize and sizeVariation parameters. I defined some constants for colors and had the generator pick a few each time.  Now it just has to place those spots randomly on the canvas.
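A minimal sketch of this naive version, generating spot data rather than calling SVG.js directly. The palette and function names are my own; baseSize and sizeVariation are the parameters described above.

```javascript
// Naive version: pick a size near baseSize and a position anywhere on the
// canvas, with no regard for other spots.
const PALETTE = ['crimson', 'teal', 'gold', 'navy'];

function randomSpot(canvasSize, baseSize, sizeVariation) {
  return {
    x: Math.random() * canvasSize,
    y: Math.random() * canvasSize,
    r: baseSize + (Math.random() * 2 - 1) * sizeVariation,
    color: PALETTE[Math.floor(Math.random() * PALETTE.length)],
  };
}

function naiveSpots(count, canvasSize, baseSize, sizeVariation) {
  return Array.from({ length: count }, () =>
    randomSpot(canvasSize, baseSize, sizeVariation));
}
```

Rendering is then just a loop that hands each {x, y, r, color} to the drawing library.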

Figure 1. Randomly placed spots.
Figure 2. Randomly placed spots.

This is not how a human would draw spots.  Some spots overlap, and the distribution of spots is very uneven.  Humans tend to draw spots roughly the same distance from each other, but will avoid placing the spots in a grid.  How can I imitate that behavior?

  1. Define a reasonable distance between spots, but vary it a bit (baseDistance, distanceVariation).
  2. When placing a new spot, start from the position of the previous spot & move some distance in a random direction.
  3. Make sure that position is still on the canvas (bounds checking).
  4. Make sure that position isn’t too close to any previous spot.
  5. If placement fails, return to step 2.
  6. If placement fails too many times, give up.
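Those steps can be sketched roughly like this. The polar helper, tooClose, and the maxAttempts default are my inventions; baseDistance and distanceVariation come from step 1.

```javascript
// Step from the previous spot by a distance in [baseDistance - variation,
// baseDistance + variation] in a random direction, rejecting positions
// that leave the canvas or crowd an existing spot.
function polarOffset(x, y, dist, angle) {
  return { x: x + dist * Math.cos(angle), y: y + dist * Math.sin(angle) };
}

function tooClose(p, spots, minDist) {
  return spots.some(s => Math.hypot(s.x - p.x, s.y - p.y) < minDist);
}

function placeSpots(count, canvasSize, baseDistance, distanceVariation,
                    maxAttempts = 10) {
  const spots = [{ x: canvasSize / 2, y: canvasSize / 2 }];
  while (spots.length < count) {
    const prev = spots[spots.length - 1];
    let placed = false;
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
      const dist = baseDistance + (Math.random() * 2 - 1) * distanceVariation;
      const p = polarOffset(prev.x, prev.y, dist, Math.random() * 2 * Math.PI);
      const onCanvas = p.x >= 0 && p.x <= canvasSize &&
                       p.y >= 0 && p.y <= canvasSize;
      if (onCanvas && !tooClose(p, spots, baseDistance - distanceVariation)) {
        spots.push(p);
        placed = true;
        break;
      }
    }
    if (!placed) break; // gave up: no room near the previous spot
  }
  return spots;
}
```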

After writing a few helper functions for polar coordinates, I tried again.  I added a small white square to indicate an attempted placement that was rejected.

Figure 3. Place next spot near previous spot.

The spots don’t overlap or crowd each other anymore, but the algorithm tends to make chains of spots instead of filling regions.

Figure 4. Having trouble placing spots.

In Figure 4, the algorithm tried hundreds of times to place the last two spots. It should have given up after 10 attempts, but this bug accidentally visualizes the band of acceptable distances between spots.

In order to make the spots form clumps instead of chains, I changed where I tried to place the next spot.  Instead of placing near the previous spot, I’d try to place it near the first spot. If that failed a number of times, the region around the first spot must be too crowded, so I’ll try near the second spot, then the third, and so on.

Figure 5. A clump of spots
Figure 6. Failed placements visualized

This looks more like what I would produce if I was drawing spots on a piece of paper, or someone’s skin. Figure 6 shows that the algorithm fails to place a spot quite often, but the execution is still fast enough to seem real-time to a human observer. I may have to worry about efficiency later, but for now it’s fine.

This algorithm has a maximum number of spots it attempts to place. If it can’t find an open space near any of the existing spots, it will give up early, but it also doesn’t keep going to fill the entire canvas. Fortunately, that’s an easy fix. Instead of looping a set number of times, use while(true), and then the emergency break statement becomes the normal way of exiting.
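Putting the pieces together, here is a self-contained sketch of the final behavior: try anchors starting from the oldest spots so clumps form, and loop with while(true) until no anchor has room left, so the break becomes the normal exit. The names and structure are my own guesses, not the actual generator code.

```javascript
// Fill the canvas with clumped spots. For each new spot, try placing
// near the first spot; if its neighborhood is crowded, move on to the
// second, and so on. Exit only when no anchor has room.
function fillCanvas(canvasSize, baseDistance, distanceVariation,
                    attemptsPerAnchor = 10) {
  const minDist = baseDistance - distanceVariation;
  const spots = [{ x: canvasSize / 2, y: canvasSize / 2 }];
  while (true) {
    let placed = false;
    for (const anchor of spots) {
      for (let i = 0; i < attemptsPerAnchor; i++) {
        const dist = baseDistance + (Math.random() * 2 - 1) * distanceVariation;
        const angle = Math.random() * 2 * Math.PI;
        const p = { x: anchor.x + dist * Math.cos(angle),
                    y: anchor.y + dist * Math.sin(angle) };
        const onCanvas = p.x >= 0 && p.x <= canvasSize &&
                         p.y >= 0 && p.y <= canvasSize;
        if (onCanvas &&
            !spots.some(s => Math.hypot(s.x - p.x, s.y - p.y) < minDist)) {
          spots.push(p);
          placed = true;
          break;
        }
      }
      if (placed) break;
    }
    if (!placed) break; // no anchor has room left: the canvas is full
  }
  return spots;
}
```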

Figure 7. Enough spots to fill the canvas.
Figure 8. Enough spots to fill the canvas.

Now that I like the arrangement of spots, I can easily switch out the circles for anything with roughly the same dimensions: letters, stars, squares, or flowers.  This algorithm won’t work for objects that are significantly bigger in one dimension than another, like fish, words, or snakes.

In conclusion, I can fill a space with random spots in a way that imitates how a human would do it, which isn’t that random at all.

Solstice Cyclists part 2: data ingestion

Start with Part 1 to learn how I captured 4000 photographs of the mostly-naked, mostly-painted Solstice Cyclists.

Inadequate spreadsheet

My naive start was a spreadsheet with columns for what people wore, what they rode, and a description of their paint.  I used the row number of the spreadsheet as the cyclist ID & tagged images that contained a certain cyclist with that number.  I had columns for top, bottom, head, and face clothing.  That didn’t account for people wearing fairy wings, or sunglasses & a fake beard at the same time. Putting each piece of data in a separate column meant that I could search for “red” or for “sunglasses” but not for “red person wearing sunglasses.”  The spreadsheet was not expressive enough to capture the information, and search was inefficient, so dupe-checking took a long time.

Database design

Instead of giving each cyclist clothing slots that could each hold 0 or 1 items, I created a many-to-many relationship between clothes and cyclists. Each piece of clothing also had a “slot” attribute (top, bottom, head, face, back, or other). So a cyclist could wear any number of items, and each item would keep track of where it was worn.  Cyclists & images also had a many-to-many relationship. Vehicle & Sex were simple enumerations.  Descriptions remained as plain text.

Spreadsheet to database.

Converting all the data in the spreadsheet to DB records let me remove any inconsistencies in how I entered the data in the spreadsheet, e.g. “wig, blue” or “blue wig”.  As I added clothes & vehicles to the DB, I searched & replaced those words in the spreadsheet with the DB IDs.  I had to be careful to replace only words in the appropriate columns, since the plain-text descriptions sometimes referenced clothing or vehicles. Sometimes I missed and found a description like, “Mostly red, wearing a green 73” which is quite confusing.

Once I’d replaced all the words with database IDs, I exported the spreadsheet as a CSV file and wrote a PHP script to ingest it into the database. I chose PHP because I’ve already done a lot of SQL with PHP for my Atlanta Fashion Police & convention gallery projects.  The script was pretty simple: the line number was the cyclist ID, the first column contained an ID for Table X, the second column contained an ID for Table Y, and so on. My PHP server has a maximum execution time of 30 seconds, so I added parameters to the script to ingest only 100 lines at a time and ran the script multiple times. Since it’s a private PHP server that doesn’t have consumer traffic, I should have just increased the timeout, let the script run, then changed it back.

While building the spreadsheet, I had been tagging photographs in Lightroom with cyclist IDs. I exported the tagged photographs into a certain directory, then wrote another PHP script to iterate through all files in that directory, read the EXIF data, and fill in the images_show_cyclists table.

New frontend

This is my process for identifying cyclists going forward.  I look at an image in Lightroom and find a new cyclist who was not in the previous image.  I may scrub back and forth in the timeline to get a better view.  I fill in the search/create page to see if I have already seen a similar cyclist.

New “clothing” dropdowns are created as existing ones are filled in, so I can specify any number of clothing items. The “description” field checks for each word in order, so “blue yellow” matches both “blue & yellow stripes on arms” and “blue torso, red arms, black legs, goofy hat, yellow face”.
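I read that as an in-order substring match, something like this hypothetical helper (the name and details are my own):

```javascript
// Each query word must appear in the description as a substring, and
// the words must appear in the same order as in the query.
function descriptionMatches(query, description) {
  const haystack = description.toLowerCase();
  let pos = 0;
  for (const word of query.toLowerCase().split(/\s+/).filter(Boolean)) {
    const found = haystack.indexOf(word, pos);
    if (found === -1) return false;
    pos = found + word.length;
  }
  return true;
}
```

Note that order matters: “yellow blue” would not match “blue & yellow stripes on arms”.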

Clicking “Find matching cyclists” will either show a list of cyclists with the features I’ve selected, or unlock the “CREATE” button if there are no matching cyclists.  Each matching cyclist is a link that takes me to a page that lists its features, what images it appears in, and previews one of those images.

Having a picture of the cyclist on the “view cyclist” page makes it much easier to confirm if I’ve actually found the cyclist I’m looking for, since I can just look between the two images.

The “EDIT this cyclist” page is almost identical to the search/create page, but instead of starting blank, it starts with data filled in from the DB.

Cyclists make multiple laps and groups tend to stick together, so if I see one cyclist back for a second lap, I can look at photographs from her first lap and identify some of the cyclists around her as well.

Preliminary data

I haven’t examined all the photographs yet, but here are some things I’ve discovered so far.

From the 2012 photos, taken over 50 minutes, I identified 1475 Solstice Cyclists.

Here’s a graph of how many passed over time. 1 pixel vertically = 1 cyclist; 1 pixel horizontally = 1 second.  Red represents cyclists on their first lap, green the second, and blue the third.  There are gaps when my view was blocked, the street was empty, I had to switch memory cards, and when traffic stopped & I paused the automatic camera.

The male/female split is 49/51, even closer than Dragon Con’s demographics, and very different from the split seen in most photographers’ galleries, in which images of women dominate.  Hmmmmmm. How curious.  HMMMMMM.

1300 people rode bicycles, which is to be expected from a group called the Solstice Cyclists, but I also saw:

  • 39 people on foot
  • 23 on inline skates
  • 6 on roller skates
  • 24 on scooters
  • 5 unicycles
  • 10 people on 5 tandem bikes
  • 7 skateboards
  • 2 pedicabs, with 2 drivers & 4 passengers

I also identified some groups & popular “costumes”:

  • 11 giraffes
  • 45 mermaids
  • 8 Care Bears
  • 39 people wearing actual, normal clothes
  • 27 Wonder Women

I still have around 700 images to look through, so these numbers will change a bit, but as you can see from the graph, most of the cyclists in these later images are back for another lap, and there aren’t many new cyclists.

Once all that is done, I can start (START!) on the actual meat of this project: creating a grammar for bodypaint based on these thousands of examples & generating new paint patterns.

Solstice Cyclists part 1: data capture

The Solstice Cyclists, an intentionally-disorganized group of mostly-naked, mostly-painted cyclists who precede and overwhelm the Fremont Solstice Parade each year, are one of my favorite groups to photograph.  They are colorful, creative, joyful, and high-energy.

Last year I decided I needed a photo of every single Solstice Cyclist.  (Does this seem familiar?)  I had two reasons:

  1. Statistics. Photographers’ galleries contain mostly women. Is this disparity caused by population imbalance or by selection bias? Solstice Cyclists are famous for being naked cyclists, but some people wear some clothing. How common is that?  Which protective device is more common: bike helmet or sunglasses?
  2. Source data for grammar. I want to expand my bodypaint generator to use graphics, and I want the generator’s output to mirror actual paintjobs. Once I identify all the different cyclists, I can study their paintjobs, break them down into parts, and put those parts back together in novel but believable ways.

I got a tripod and a timer so one camera could automatically photograph everyone who passed while I did my normal photography beside it. I had to juggle a surprising number of factors to place that camera properly.

  • To avoid being blocked by spectators, the tripod needed to be either right next to the street, or high enough to shoot over their heads. I saw a few balconies, stout tree branches, even a bridge, that could get the needed height, but that brought new problems. Most paintjobs photograph best from the front, and bicycle riders tend to lean forward, so a camera that is too high has a bad angle. Also, accessing those high places is non-trivial, so I opted for a front-row seat.
  • Aiming down the street at approaching cyclists is my usual MO, but an automated camera will have trouble with that.  Since the camera is looking down the street, cyclists in the same image can be 10 feet or 100 yards away. How does the camera know which one to focus on, and which ones to leave blurry?  Cyclists in front will obstruct the camera’s view of cyclists behind them.
  • The route turns a few times. Maybe setting up at a corner will alleviate these issues. Setting up just after a corner sets a maximum distance at which cyclists will appear. Any further and they’d be in the crowd. There’s still the problem of cyclists approaching the camera and filling the frame, blocking other cyclists.
  • What about aiming across the street?  Cyclists will stay about the same distance from the camera as they cross the frame, and they are only 3 or 4 abreast, as opposed to unlimited ranks front-to-back, so obstruction is less of an issue.  Since I’m as far forward as possible (so spectators don’t stand in front of me) cyclists on the near side of the street will be very close. My lens might not be wide enough to capture their whole bodies, and they will cross the frame very quickly, maybe in between ticks of the automatic timer.
  • Thus, I decided to shoot across the street at the cyclists on the far side of the road. The frame is wide enough at that range that I’ll get several photos as each cyclist passes. Three-quarter to side view is not ideal, but still pretty good.  I had to accept cyclists on the near side sometimes blocking the shot, but it was the best I could do.
  • Oh, also! Position along the parade route matters as well.  The Cyclists circle back so they stay close to the parade (human-powered floats are much slower than bicycles). Near the end of the parade route, there are fewer spectators and no returning cyclists to block my view, but I only get one chance to see each cyclist, and some cyclists leave the route before then (mechanical failures, etc.) Closer to the start of the route, I get multiple chances to photograph each cyclist, but more obstructions.

The day before the parade I scouted the parade route, looking for places to set up.

I chose the spot on the right, which is near the “center of the universe” sign on the east side of Fremont Ave. The tree gave some protection to the tripod. It’s a lot easier to accidentally trip over a tripod than it is to walk into a tree.

During the parade I kept looking over at the “shots remaining” counter on the tripod-mounted camera like the marines watching the sentry guns in Aliens.  “That number is going down.  It, it keeps going down.  Are we going to run out before they stop coming?”  The automatic camera filled a 32GB memory card and I had to swap in another in the middle of the parade.  Whenever a traffic jam stopped the stream of cyclists passing me, I’d pause the automatic camera to save disk space.

In all, the automatic camera captured 2644 images.  That’s equivalent to an entire day of Atlanta Fashion Police, except it took only 63 minutes, not 16 hours.  I took an additional 1400 photos with the camera I was holding.

I considered using computer vision to help me identify cyclists, but even nudity-detecting algorithms were bamboozled by the cyclists’ coloration. So I couldn’t even get “Yes, there is a person in this photo”, much less, “There are 6 people in this photo, and the guy with the red stripes and sunglasses has appeared in 3 other photos.” Time to use my eyes, the best pattern-recognizers I know! I thought I could store all the information in a CSV file. I’m only recording a few pieces of data for each cyclist, do I really have to make an SQL database with webforms to search and update it?

1064 rows later, I realized that, yes, I did need that DB.  Since cyclists could make several laps, and I was gathering data from both cameras, I needed to check for duplicate cyclists often.  Ctrl-F in a spreadsheet wasn’t cutting it.

Next time: building that database, and a few insights from the data.

PROCJAM 2017: Spaceship Wrecker

PROCJAM is a relaxed game (or anything) jam with the motto “Make Something That Makes Something”. It basically ran from 4 NOV 2017 to 13 NOV 2017, but the organizers emphasize not stressing out about deadlines.

Procedural generation is my jam, as you may have noticed from my Pathfinder Twitterbots and the little generators on my site.  I didn’t want to generate terrain, caves/dungeons, or planets, because so many generators like that already exist.  I had no shortage of ideas to choose from, though.  Deciding which one to pursue took quite some time!  Some potential projects:

  • Generate spaceships from subsystems that produce & consume various resources
  • Generate fantasy creatures with different senses & capabilities, and individuals of those races who may have disabilities or mutations
  • Generate buildings that accommodate multiple fantasy creatures with widely varying needs.
  • A MUD Twitterbot with emoji visualization
  • Generate footprints that players can follow, and a field guide that identifies the creatures that leave the footprints.

The generators I want to make have lots of constraints and dependencies. Many generators are stateless: the number of projectiles the gun fires can be chosen without regard for the projectile’s damage, or fire rate, or elemental affinity.  Not so the fantasy creatures, who won’t use a spoken language if they can’t hear, or the spacecraft, which can’t use a science lab without a generator to power it.  I feel the added complexity in generation is worth it, because it forces the generated artifacts to make sense.

I chose “Spaceship Wrecker”, which generates a spaceship full of subsystems, then lets the player launch an asteroid or bullet to damage some of those systems and watch the failures cascade across the ship. In my mind I envision players boarding wrecked spaceships, prying open airlocks, re-routing cables, and getting the ship back online, but let’s start small, build up incrementally, and see how far I get in a week.

What parts do I need, and what do they depend on?

  • Engines (move the ship)
  • Fuel tanks (supply the engines)
  • Generators (supply electrical power to all kinds of parts)
  • Life support (supply air to rooms with people in them)
  • Crew quarters (supply crew to operate various parts)
  • Command/cockpit/bridge
  • Mission systems (sensors, cargo, labs, etc.)

This gave me my list of resources:

  • Air
  • Crew
  • Fuel
  • Power
  • Thrust (technically that’s two resources: engines overcome inertia with thrust, but it’s simpler to create demand for engines by saying that parts consume thrust.)

I built some placeholder assets as Unity prefabs: 1-meter cubes, color-coded by function, with positive or negative resource values to indicate what they produced and consumed. At first I kept track of supplies at the ship level. If the need for power across all parts on the ship was X, I added enough generators to supply X power.  I didn’t care which generators supplied which components yet.  I would add some graph system later to distribute the resources.

I could specify a few components to start with, and the generator would add components until all components were satisfied.  Fun edge case: a ship with no components has no unmet needs, and thus is a valid ship.
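A sketch of that ship-level balancing pass. The part definitions here are invented placeholders (the jam project's actual resource numbers aren't in the post); only the strategy of adding suppliers until no resource is in deficit comes from the text above.

```javascript
// Placeholder part types: positive numbers produce a resource,
// negative numbers consume it.
const PART_TYPES = {
  engine:      { thrust: 10, fuel: -2, power: -1 },
  fuelTank:    { fuel: 5 },
  generator:   { power: 4, crew: -1 },
  quarters:    { crew: 3, air: -2, power: -1 },
  lifeSupport: { air: 6, power: -1 },
};

// Sum each resource across the whole ship.
function totals(parts) {
  const sums = {};
  for (const p of parts)
    for (const [res, amount] of Object.entries(PART_TYPES[p]))
      sums[res] = (sums[res] || 0) + amount;
  return sums;
}

// Keep adding a part that supplies the scarcest resource until
// nothing is in deficit. An empty ship has no deficits, so it's valid.
function buildShip(startingParts) {
  const parts = [...startingParts];
  while (true) {
    const sums = totals(parts);
    const deficit = Object.keys(sums).find(res => sums[res] < 0);
    if (!deficit) return parts;
    const supplier = Object.keys(PART_TYPES)
      .find(t => (PART_TYPES[t][deficit] || 0) > 0);
    parts.push(supplier);
  }
}
```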

Right after finishing that working algorithm that created sensible ships, I changed my data model & threw that algorithm out for a better one.  I made “ResourceProducer” and “ResourceConsumer” components to add to spaceship parts. Producers could form connections to Consumers, so each component knew how its resources were allocated.  When a component was damaged (remember the player-launched asteroid?) it could notify its consumers that the supplies were gone. Those parts would shut down, and their producer components would revoke resources from other components, spreading destruction across the ship.
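The cascade itself can be sketched in a few lines. This hypothetical version only tracks which parts supply which, without the Unity components:

```javascript
// Knocking out a part shuts down everything downstream, recursively:
// a dead producer revokes its resources, shutting down its consumers.
function destroy(part, shutDown = new Set()) {
  if (shutDown.has(part.name)) return shutDown;
  shutDown.add(part.name);
  for (const consumer of part.supplies) destroy(consumer, shutDown);
  return shutDown;
}

// Tiny example ship: one generator feeding life support and sensors.
const quarters    = { name: 'quarters',    supplies: [] };
const sensors     = { name: 'sensors',     supplies: [] };
const lifeSupport = { name: 'lifeSupport', supplies: [quarters] };
const generator   = { name: 'generator',   supplies: [lifeSupport, sensors] };

destroy(generator); // knocks out all four parts
```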

Here a part has been hit by an asteroid (indicated by the green line) and it turns red to show it’s not working. Events propagate, and three other components also shut down.  Success!

Let’s talk a bit about that asteroid. I imagine a tiny thing going extremely fast. Anything it hits is wrecked, and it penetrates to a significant depth. Multiple parts can go offline from the initial hit, if it’s lined up correctly.  I let the player orbit the ship with the camera, then click to launch the asteroid from the camera position toward the cursor position. I raycast to find the impact point, then spawn a trigger volume oriented in the same direction as the raycast.  Spaceship parts know they are damaged when their colliders intersect with the trigger. It took a few tries to get the trigger volume’s transform correct. I learned that some angle vectors contain Euler angles, so the 3 components are degrees of rotation around each axis. Other angle vectors are unit vectors that point in the desired direction.

The ring structure was a placeholder for a more meaningful arrangement that I had been putting off because it was difficult. I wanted the parts clustered together because that’s how we envision cool spaceships and because the player could then line up asteroid impacts on multiple parts.  Parts should be connected by wires or corridors.  Parts that shared resources should be close together. Engines should be in the back. Fuel tanks should be far away from crew quarters. There were so many constraints I could place on the system!

I was also replacing my 1-meter cubes with low-poly models of differing sizes.  I tried spawning parts with space in between them and using SpringJoints to pull them together, but SpringJoints maintain distance: I had found a way to push parts away from each other, the opposite of what I wanted.

I thought about trying to place parts at the origin, seeing if they collided with anything, and pushing them to the edge of that hitbox if they did. I wasn’t sure what would happen once several parts were placed, and the first push might push the new part out of one part and into another.

I made a 2D Boolean array in which each cell represented a square meter that was either empty or occupied. As I spawned a new part, I’d get its size from its collider & try to fit a box of that size into the grid, starting at the center. If it didn’t fit, I pushed it in a random direction until it did. So my ships expanded from the center and all the parts touched each other.
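A rough sketch of that occupancy-grid placement, assuming one random push direction per part and an invented part format; the actual Unity code isn't shown in the post.

```javascript
// Can a w-by-h footprint sit at (x, y) without leaving the grid or
// overlapping an occupied cell?
function fits(grid, x, y, w, h) {
  if (x < 0 || y < 0 || x + w > grid.length || y + h > grid[0].length)
    return false;
  for (let i = x; i < x + w; i++)
    for (let j = y; j < y + h; j++)
      if (grid[i][j]) return false;
  return true;
}

// Start a new part at the center and push it in one random direction
// until its footprint fits, then mark those cells as occupied.
function place(grid, w, h) {
  let x = Math.floor((grid.length - w) / 2);
  let y = Math.floor((grid[0].length - h) / 2);
  const dir = [[1, 0], [-1, 0], [0, 1], [0, -1]][Math.floor(Math.random() * 4)];
  while (!fits(grid, x, y, w, h)) {
    x += dir[0];
    y += dir[1];
    if (x < -w || y < -h || x > grid.length || y > grid[0].length)
      return null; // walked off the grid: give up on this part
  }
  for (let i = x; i < x + w; i++)
    for (let j = y; j < y + h; j++)
      grid[i][j] = true;
  return { x, y, w, h };
}
```

Because every part starts at the center and only moves until it finds room, the ship grows outward with all the parts touching.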

But the parts only knew that other parts took up space. Related parts didn’t cluster together, and engines pointed their nozzles into bedrooms. Some algorithm research revealed that the “bin packing” problem was NP-hard, so I felt better about not immediately knowing how to proceed. I decided to sidestep the problem by rotating the engines so their nozzles were pointing down. All the parts were on the same 2D plane, so there would never be a part below an engine to get scorched.  I finished replacing all the placeholders with low-poly models and felt pretty good about my complex creations.

As a final step, I added another shader to differentiate between destroyed by the asteroid (bright pink) and shut down by system failure (dark red). I’m still looking to the future, when players go inside these ships to repair them.

So it’s done!  Basically. I should add some UI: instructions for how to interact with the ships. Of course, graphically indicating the connections between parts would be cool.  A spiral search is probably better than a random walk for placing new components. A graph-based approach could improve co-location of related parts. It would be nice to have corridors for the crew to move through. Those could be hit by asteroids too, so each room would need airlocks. Are the airlocks dependent on main power to operate…?

Like I said, it’s basically done!

Pathfinder Bots: Simulation

@FightBot1 and @FightBot2 are Twitter bots that battle each other with randomly-generated level 1 Pathfinder Fighters.

Pathfinder’s combat rules are very complex, so I knew implementing the whole thing was impractical. I chose to exclude spells and skills, and as many special attacks and activated abilities as possible.  Thus I chose the Fighter class, the simplest class that just uses weapons.

Usually, Pathfinder has a Game Master, who has final say on anything that happens in the simulated world.  Players announce what they intend their characters to do, but the GM can modify, interrupt or ignore those actions when necessary.  When playing over Twitter, there is no GM, just the two players passing messages back and forth.  Thus, any action that interrupts another action, as well as any hidden information that can affect the outcome of a player’s action, is no good.  That means anything that provokes attacks of opportunity (casting spells, firing ranged weapons, performing combat maneuvers, managing inventory, drinking potions, or even moving) was excluded.

Position and movement gave me trouble as well.  Pathfinder is based on a grid of 5-foot squares (actually cubes, when the game remembers the third dimension). Level 1 fighters can’t fly, so I could ignore height.  Should I simulate a 2D arena? Should it be a featureless square, or a circle, or have terrain? What happens if a fighter runs into a wall? Into a corner?  Maybe a one-dimensional position, just a distance from the opponent, would be sufficient to let ranged weapons, reach weapons, and normal weapons seem different.  If the fighters never take actions that provoke attacks of opportunity, they won’t get interrupted.  But knowing when a fighter is threatened requires knowing what the enemy is wielding. So I decided to only use melee weapons and ignore positioning altogether. If one character has a reach weapon, just pretend that the fighters are making 5-foot steps each round.
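For context, the melee exchange the bots are left with boils down to the standard d20 attack roll. This is a generic sketch of those core rules, not the bots' actual code:

```javascript
// Roll a die with the given number of sides.
function d(sides) { return 1 + Math.floor(Math.random() * sides); }

// Standard d20 attack: d20 + attack bonus vs. AC, with a natural 1
// always missing and a natural 20 always hitting. Returns damage dealt.
function attack(attacker, defender) {
  const roll = d(20);
  const hits = roll !== 1 &&
               (roll === 20 || roll + attacker.attackBonus >= defender.ac);
  if (!hits) return 0;
  const damage = d(attacker.damageDie) + attacker.damageBonus;
  defender.hp -= damage;
  return damage;
}
```

(This omits critical-hit confirmation and damage multipliers, which the full rules include.)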

So fighters can only perform melee attacks.  What races are allowed, and what equipment and feats will they use? I used only items from the Core Rulebook, not the innumerable books released since.  The CRB has seven races.  Only feats available at level 1 that affect health, initiative, or melee attacks are relevant.  Fighters are proficient in all armor, shields, and simple & martial weapons, so those are in as well.
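With those constraints, the core loop that remains is small enough to sketch. Here’s a minimal, hypothetical version in Python (the function names, stat values, and structure are my illustration, not the bots’ actual code): two fighters trade melee attack rolls, natural 20s threaten criticals that must be confirmed, and the duel runs until one fighter drops.

```python
import random

def attack(bonus, ac, die, dmg_bonus, crit_range=20, crit_mult=2):
    """One melee attack: d20 + bonus vs. AC; a natural 20 always hits,
    a natural 1 always misses."""
    roll = random.randint(1, 20)
    if roll == 1 or (roll != 20 and roll + bonus < ac):
        return 0
    damage = random.randint(1, die) + dmg_bonus
    # A threatened critical must be confirmed with a second attack roll
    if roll >= crit_range and random.randint(1, 20) + bonus >= ac:
        damage *= crit_mult
    return damage

def duel(hp_a, hp_b, atk, ac, die, dmg_bonus):
    """Two identical fighters trade blows until one drops; returns the winner."""
    while True:
        hp_b -= attack(atk, ac, die, dmg_bonus)
        if hp_b <= 0:
            return "Fighter 1"
        hp_a -= attack(atk, ac, die, dmg_bonus)
        if hp_a <= 0:
            return "Fighter 2"
```

The real bots also handle initiative, iterative attacks, and feat effects, but this is the shape of the exchange.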

In subsequent blog posts, I’ll explain the procedural generation of characters & descriptive text, and how I integrated with Twitter.

Building a simulation: what is vs. what should be

When I find something that is fun to do in life, I want to make a game out of it so I can share the experience with others.  But when I closely examine the systems and rules that the world runs on, I realize how messed up they are, and that makes me sad.

I want to build a conflict resolution system where violence is optional and body language is significant.  I look for examples of tense situations that didn’t result in violence and remember some stories my friends told on Twitter.  I am sickened to discover that I’m about to gamify my friends’ trauma, because those stories were about street harassment.  I don’t want to mine the pain of people I care about for a game.  I don’t want to make a game that makes people relive that pain.  But if I accurately simulate human interaction, there will be situations where “don’t make eye contact & hope not to die” will be the ideal response, because those situations are common in real life.

So replicating the awful systems of real life seems cruel, but changing the systems seems dishonest.  All simulations are simpler than the real thing, so I’m required to choose some elements to keep and some to discard.  This is why people say that all games are political.  The game maker decides what parts of reality to consider important, or worthy.  Even if I don’t want that responsibility, I have it, because I can’t replicate a system completely.  Even if I make those choices unthinkingly, I’ve still made them.

Another example is cosplay photography.  I think a board game about managing time and energy while trying to do photoshoots during a convention would be really fun.  Photographers with different styles and goals could be different playable classes.  Seems good, but some photographers seek social capital at the expense of others.  Some exploit minors.  Some won’t shoot men, or black people.  Do I offer these as options for players to choose?  Do players want these options?

More subtly, the resources I picked for a photographer to manage are artistic fulfillment, friendship, and fatigue, because those are the most important factors to me when I photograph a convention.  But my priorities and experiences are not universal.  Other photographers have different priorities: good priorities, not the awful goals from the last paragraph, just different ones.  So what should I include in my game?

If art is self-expression (that’s a whole blog post by itself) and the game is my art, then I should make systems that appeal to me. Sometimes that will make the fictional world operate the way I think the real world should operate.  Sometimes that will make the game operate in ways I think are mechanically interesting, without regard to real-life applications.

But if the art in games comes from player expression, then the players are limited to the tools I provide them, and I will deny some of them tools they deem important, since people are diverse and I can’t predict what everyone will need from my game.

Procedurally-generated bodypaint

It’s text, but NSFW text.  Procedural Paint-Job Generator.

This idea came to me in the wee hours of the morning.  I got out of bed, coded all morning, and went back to sleep after publishing it.

I’ve been creating a vocabulary to describe the body paint at the Fremont Solstice Parade for a while.   I planned to use that in some sort of database-driven visualization for Solstice Parade photos, somewhat like Atlanta Fashion Police.  I still plan to do that, but this project goes the other way: instead of describing an existing paint-job with the vocabulary, I use the vocabulary to create a description of a hypothetical paint-job.  The plausibility of the paint-jobs varies, but that’s part of the charm.

I used Kate Compton’s Tracery to generate the descriptions.  I started by adding all the words I could think of, grouped logically into colors, color modifiers, patterns, animals, vehicles, and so on.  Then I built phrases that combined those elements, built the phrases into clauses, and the clauses into sentences.  It can suggest individual paint-jobs as well as groups, and pluralizing complex phrases is tricky!  The following sentences have identical meanings, but must be modified differently to be pluralized.

  • green and bright yellow giraffe
  • green giraffe with bright yellow spots

Putting an “S” at the end of the phrase doesn’t always work.  There are also things that are always plural, like “roller skates”.

  • You get your roller skates.
  • You get your bicycle.
  • You rent roller skates for your team.
  • You rent bicycles for your team.

English is tricky!
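To make the problem concrete, here’s a sketch of how phrase pluralization can work (my illustrative code, not the generator’s actual rules): a naive “add an s” fails both on words that are already plural and on phrases where the head noun sits before a modifier.

```python
# Words that are already plural; a real lexicon would be much bigger.
ALWAYS_PLURAL = {"skates", "pants", "scissors"}

def pluralize_word(word):
    if word in ALWAYS_PLURAL:
        return word  # "roller skates" stays "roller skates"
    if word.endswith(("s", "x", "ch", "sh")):
        return word + "es"
    return word + "s"

def pluralize_phrase(phrase):
    """Pluralize the head noun, which sits just before any 'with ...' modifier."""
    head, sep, rest = phrase.partition(" with ")
    words = head.split()
    words[-1] = pluralize_word(words[-1])
    return " ".join(words) + sep + rest
```

With this rule, “green and bright yellow giraffe” becomes “green and bright yellow giraffes”, while “green giraffe with bright yellow spots” correctly becomes “green giraffes with bright yellow spots”, pluralizing the giraffe and not the spots.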

Whenever the generator recommends a pattern, it may recommend two patterns instead.  Those two could also recommend two more, so there’s no guarantee the recursion ever ends.  Browsers have a lot of memory, text doesn’t take much memory, and it’s funny to get a big paragraph recommending 20 different patterns in a single paint-job, so I leave it in.
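The recursive rule looks something like this (my names and probabilities, not the real grammar): each pattern may, with some probability, expand into two combined patterns, and nothing caps the depth.

```python
import random

PATTERNS = ["stripes", "spots", "swirls", "flames", "scales"]

def pattern(branch_prob=0.3):
    if random.random() < branch_prob:
        # Recurse: this "pattern" is really two patterns layered together.
        return pattern(branch_prob) + " over " + pattern(branch_prob)
    return random.choice(PATTERNS)
```

As long as the branching probability stays below one half, the expected output is finite, but any individual call can still produce a very long stack of patterns, which is where the 20-pattern paragraphs come from.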

The paint-jobs produced by this generator are by turns absurd, practical, amusing, and shocking.  What more could a procgen system strive for?

Jam, the Whirlwind Spear

While building “#1 Sap Master” I lamented that there were no finesseable reach weapons, but one does exist!  The Elven Branched Spear is not only finesseable, but gets a +2 bonus to attacks of opportunity.  So I built a Monk around it.  I didn’t try for the Flowing Monk this time, just an Unchained Monk, which is basically Monk 2.0.

The reach weapon (acquired through Ancestral Arms) and her high Dexterity give her lots of AOOs, and Panther Style also gives her a pool of “retaliatory unarmed strikes” that she can use each round by provoking AOOs from her enemies.  Flying Kick lets her move during a Flurry of Blows, so she can take all 3 attacks on her turn, move past enemies and retaliate when they take their AOOs, then take her own AOOs if the enemies she left behind try to close in again.  She doesn’t hit hard, but she hits often, growing more dangerous as she faces more foes.

I named her after Jam from Guilty Gear, who fights with quick strikes, flying kicks, blazing fast dashes, and a distinctive “HOOOO!” battle cry.  She’s basically that, but with a spear.

Jam

Female Half-Elf Unchained Monk 7
N Medium humanoid (human, elf)
Init +7; Senses darkvision 60ft., Perception +8

DEFENSE

AC 25, touch 21, flat-footed 20 (+1 armor, +5 Dex, +4 Wis, +1 monk, +1 deflection, +3 natural); +4 vs. AOOs
hp 64 (7d10+28)
Fort +9, Ref +12, Will +4 (+2 vs. enchantment, +2 vs charm & compulsion) Immune: disease

OFFENSE

Speed 50 ft.
Melee
flurry of blows unarmed strike +12/+12/+7 1d8+5
+1 elven branched spear +13/+8 1d8 x3 P (brace, reach, +2 attack on AOOs)

Special Attacks Stunning Fist 7/day (Fort DC 17; stunned 1 round or fatigued 1 minute)

STATISTICS

Str 10, Dex 20, Con 14, Int 9, Wis 20, Cha 7
Base Atk +7; CMB +7; CMD 27
Feats Combat Reflexes, Dodge, Exotic Weapon Proficiency (elven branched spear), Improved Unarmed Strike, Mobility, Panther Style, Panther Claw, Panther Parry, Stunning Fist, Weapon Finesse
Skills Acrobatics +13, knowledge (history) +3, knowledge (religion) +3, Perception +8, Sense Motive +10, Stealth +13
Languages Common, Elven
Special Qualities

  • Ancestral Arms: Exotic Weapon Proficiency (elven branched spear)
  • Blended Views: Darkvision 60 ft.
  • Evasion: no damage on successful Reflex save.
  • Ki pool: 7 points
    • spend 1 point: gain 1 attack at full BAB as part of full attack
    • Sudden Speed. swift action, 1 ki point: increase base land speed by 30 ft. for 1 minute.
    • Barkskin: standard action, 1 ki point: +3 natural armor bonus for 70 min.
  • Ki strike: unarmed attacks overcome DR for magic, cold iron, and silver
  • Style Strike
    • Flying kick: During flurry of blows, move up to 20 ft. (provoking AOOs as normal), ending adjacent to a foe and kicking it.
  • Combat Reflexes: 6 AOOs per round
  • Panther Style: When you provoke an AOO by movement, make a retaliatory unarmed strike against the creature making the AOO (limit 4/round). If you damage the creature, its AOO takes -2 on attack and damage.

Traits: reactionary (+2 initiative), focused disciple (+2 saves vs. charm & compulsion)
Gear: +2 cloak of resistance, +2 belt of Dexterity, +2 headband of Wisdom, agile amulet of mighty fists, +1 bracers of armor, +1 elven branched spear, +1 ring of protection, handy haversack, monk’s kit, ioun torch.

Three-armed fighter

In my last post, I said that Triali could be just as effective if she were a two-handed fighter, so I built a two-handed fighter who uses her third hand to hold a tower shield.  She’s Triali’s half-orc half-sister.  The Two-Handed Fighter archetype makes two-handed weapons hit even harder.  Her Alchemist levels also grant her a mutagen and a few extracts, like enlarge person, that let her hit harder and control more area.

Chely Temminck

Female Half-Orc Two-Handed Fighter 5/ Alchemist 2
N Medium humanoid (human, orc)
Init +4; Senses darkvision 120ft., Perception +?

DEFENSE

AC 28, touch 15, flat-footed 27 (+10 Armor, +1 DEX, +6 shield +1 deflection)
hp 68 (7d10+28)
Fort +14, Ref +10, Will +7

OFFENSE

Speed 20 ft.
Melee
MW adamantine Lucerne Hammer +13/+6, 1d12+14 (x2) B or P
MW halberd +13/+6 1d10+14 (x3) P or S
MW cold iron Orc double axe +11/+4 1d8+13 (x3) S
MW alchemical silver Orc double axe +11/+4 1d8+12 (x3) S
Ranged
MW composite longbow +9 1d8 (x3)
Special Attacks Overhand Chop, Shattering Strike

Alchemist Formulae prepared:
level 1 (DC 12) expeditious retreat, enlarge person, enlarge person

STATISTICS

Str 20, Dex 14, Con 16, Int 12, Wis 10, Cha 7
Base Atk +6; CMB +11; CMD 23 (+6 vs. bull rush & overrun)
Feats Combat Reflexes, Furious Focus, Iron Will, Mobile Bulwark Style, Mobile Fortress, Power Attack, Shield Focus, Weapon Focus (Lucerne Hammer)
Skills (18 ranks) Craft (alchemy) +11 (+5 to create alchemical items), Linguistics +2, Spellcraft +6
Languages Common, Orc, Elven, Giant
Special qualities

  • Missile Shield: once per round, when a ranged attack would hit you, deflect it harmlessly.
  • Vestigial Arm: a third arm to hold the tower shield
  • Overhand Chop: add 2*STR instead of 1.5*STR when making a single attack
  • Mutagen: 20min duration. +4 to STR or DEX or CON, -2 to INT or WIS or CHA, respectively
  • Shattering Strike: +1 to CMD and CMB on sunder attempts. +1 damage against objects
  • Combat Reflexes: 3 AOOs per round
  • Sacred Tattoo: +1 luck bonus on all saves
  • Fate’s Favored: increase all luck bonuses by 1
  • Reactionary: +2 initiative
  • Weapon Training: +1 attack and damage for two-handed polearms
  • Dragon Sight: darkvision 120 ft.
Traits: fate’s favored, reactionary
Gear: +1 full plate, +1 tower shield, +1 ring of protection, +2 cloak of resistance, +2 belt of Strength, MW adamantine lucerne hammer, MW halberd, MW cold iron/alchemical silver orc double axe, MW composite longbow, 20x arrows, 20x blunt arrows, cracked pale green prism ioun stone, ioun torch, 50 ft. rope, fighter’s kit.