Sunday, June 5, 2016

Emily Short: The Mary Jane of Tomorrow

http://ift.tt/1WB8wVc

For IF Comp 2015, I offered as a prize to contribute a piece set in the same universe as the author’s game. Steph Cherrywell chose this prize for Brain Guzzlers from Beyond!, which was exciting, since I’d enjoyed Brain Guzzlers a lot; and also slightly daunting, since Steph obviously didn’t need any help in coming up with art or feelies. Originally I was going to write a short story, but as I replayed the game and reviewed transcripts, I was hit with an idea for something more interactive. The result is The Mary Jane of Tomorrow, a not-too-difficult parser puzzle game set a few months after the events of Brain Guzzlers. (Estimated play time: roughly 45 minutes.)

In the tradition of fanfic, it focuses on the relationship between a couple of the characters in the original game, Mary Jane Minsky and Jenny Yoshida. In canon, their closeness is demonstrated in various ways but never given center stage.

Gameplay-wise, The Mary Jane of Tomorrow is about training a robot to demonstrate certain personality and knowledge traits. To do that, the game makes extensive use of procedural text, borrowing the text generation library and even some of the corpora I used for Annals of the Parrigues. After the fold, I’ll talk about Mary Jane as a procedural text project, but it’s spoilery, so you probably want to play it first if you think you might enjoy it.

Steph decided she wanted to share her prize with the public, so The Mary Jane of Tomorrow is now available to play — and she even very kindly made some cover art for it, to match up with the rest of her work.

The game’s been uploaded to the IF Archive; in the short term, there’s also a Dropbox link for it, which I’m hoping will hold up until the file moves out of Archive Pending.

For some time I’ve been experimenting with a text generation system that makes use of a bit more persistent state than Tracery, but otherwise shares a lot of the same characteristics, especially the ability to nest symbols for complex expansion. When I considered a game about training a robot, this felt like a natural fit. The player’s actions would change the tags that the robot was using to select dialogue, and then it would dynamically produce output that matched its current training profile. Some of the outputs the game needed were:

Procedural recipes (requires cookery training): The first procedural thing I did in the game was create a generator for 1950s recipes, based off some websites that divulge the horrible secrets of cookbooks past: “Spam and Pimento Olive Surprise” or “Chipped Beef Loaf in Jello,” this sort of thing.
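A generator like that can be as simple as a few templates slotted over word corpora. Here’s a sketch of the idea; the word lists and template strings are my guesses at the flavor, not the game’s actual data:

```python
import random

# Illustrative stand-ins for the scraped 1950s-cookbook corpora.
PROTEINS = ["Spam", "Chipped Beef", "Deviled Ham", "Tuna"]
ACCENTS = ["Pimento Olive", "Jello", "Aspic", "Canned Pea"]
FORMS = ["Surprise", "Loaf", "Ring", "Casserole"]

# Templates combine one item from each corpus into a recipe name.
PATTERNS = [
    "{protein} and {accent} {form}",
    "{protein} {form} in {accent}",
]

def recipe_name():
    """Produce one horrible mid-century recipe title."""
    return random.choice(PATTERNS).format(
        protein=random.choice(PROTEINS),
        accent=random.choice(ACCENTS),
        form=random.choice(FORMS),
    )
```

Both example titles from the post — “Spam and Pimento Olive Surprise” and “Chipped Beef Loaf in Jello” — fall out of one template each.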

Procedural beat poetry (requires poetry training as well as the ennui attitude marker): In Brain Guzzlers from Beyond!, there’s an extensive puzzle around making poetry that will satisfy (or irritate) a beat poet. She gives a bunch of examples of what that kind of poetry will look like, which is itself pretty much a satire of the form. So I started experimenting with what a generator would do if it produced something similar to this.

Procedural romance couplets (requires poetry training and rhyme training): If your robot is trained in rhyme as well as general poetic ability, it will generate couplets based on cod Byron. I have a love-hate relationship with Byron: some of his poetry I quite like, some I consider nearly intolerable. To a Beautiful Quaker struck me as particularly susceptible to procgen satire, since there’s not really a strong through-line development of narrative or metaphor, and mostly the pairings just needed to rhyme.

Procedural limericks (requires rhyme training but a lack of poetry): these were tricky to do. Limericks require both rhyme and meter, and I turned to Twitter to suggest some word corpora with metrical information. Two sources proved especially useful: a corpus that lists words according to foot type (all the anapests you can eat!) and Allison Parish’s scripts for getting rhyme and meter matches out of the CMU pronunciation dictionary. I was also recommended this source of prosodically annotated corpora.
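A minimal sketch of how rhyme and syllable counts can be read off CMU-style pronunciations — the dictionary entries below are a tiny hand-copied excerpt, whereas the game leaned on the much larger corpora and scripts mentioned above:

```python
# ARPAbet phonemes with stress digits, in the CMU dictionary's format.
CMU = {
    "peru":  "P ER0 UW1".split(),
    "shoe":  "SH UW1".split(),
    "canoe": "K AH0 N UW1".split(),
    "hat":   "HH AE1 T".split(),
}

def rhyme_part(phones):
    """Phonemes from the last stressed vowel onward."""
    for i in range(len(phones) - 1, -1, -1):
        if phones[i][-1] in "12":  # stress markers appear on vowels
            return phones[i:]
    return phones

def rhymes(a, b):
    """Two words rhyme if their rhyme parts match."""
    return rhyme_part(CMU[a]) == rhyme_part(CMU[b])

def syllables(word):
    """Count vowels (phonemes ending in a stress digit)."""
    return sum(ph[-1].isdigit() for ph in CMU[word])
```

With this, a line-filling routine can demand, say, a two-syllable word rhyming with “Peru” before it ever considers a candidate.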

Ultimately, as interesting as it was to work with the constraints of metrics and rhyme, I feel like limericks were too constrained to make the most of procedural serendipity. It was possible to remix the first line a lot of ways — the first line of a limerick is usually basically a throw-away that just establishes the rhyme scheme for the rest of the verse. There was once an old man from Peru. But the rest of the limerick has to get a lot of work done in a short time: set up a situation, elaborate that situation in an inner couplet, then resolve it with a punch line. Getting good results required exerting so much fine control over output that it didn’t feel that procedural any more. I may take another run at this problem sometime, though.

Procedural freeform compositions (poetry training, no ennui): if the robot doesn’t know about rhyme, does know about poetry, but has low ennui, it will compose poems of three to five lines based on a couple of available themes. One of the standard structures is Address-Simile-Proverb, so one gets output like

Listen up, creditor
Perform ecstatic dances at the altar of my wealth!
Shrouds have no pockets.

or

Beware, dearest soldier
Thy heart is a coin!
A sum entrusted to many servants diminishes for the master.

The proverbs are scraped from various online proverb lists, though I also had to cull the list a lot: when I first dropped the raw text in and fired up the game to test it, I was horrified to be presented with a poem whose last line featured a racist slur. Turns out people have come up with some extremely icky proverbs! I’d already done some reading about creator responsibility for procgen output, but this was a vivid reminder.
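The culling can start with something as blunt as a blocklist pass over the scraped text before any human review — a sketch, with placeholder terms standing in for actual blocked words:

```python
# Placeholder terms; a real blocklist would contain the actual words
# you never want to reach generated output.
BLOCKLIST = {"slur1", "slur2"}

def cull(proverbs, blocklist=BLOCKLIST):
    """Drop any proverb containing a blocked term."""
    def is_clean(proverb):
        words = set(proverb.lower().replace(".", "").split())
        return not (words & blocklist)
    return [p for p in proverbs if is_clean(p)]
```

A pass like this only catches exact words, of course — the slur that surprised me here is exactly the kind of thing that still needs a human read-through of the corpus.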

Or, in a slightly more complicated example:

Listen up, creditor
Pickle my wallet in vinegar!
The best things come in potting jars.
I will not complain.

In this case, the selection of “creditor” in the first line set the theme of the poem to money, which determined the choice of “wallet” as the thing to be pickled in line two. The fact that the system picked “my wallet” rather than “your wallet” then added “I will not complain” to a stack of possible concluding lines; and the selection of the pickling instruction (as opposed to some other instruction) added “The best things come in potting jars” to that same stack.

I don’t claim that these are great works of poetry, but I felt that, in comparison with the limericks, they more often came up with an interesting juxtaposition that I hadn’t already thought of myself.
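The dependency chaining in the “creditor” example above might be sketched like this: the choice of addressee fixes a theme, the theme constrains later word choices, and each choice can push a candidate closing line onto a stack. The word lists are illustrative stand-ins for the game’s corpora:

```python
import random

# Each addressee implies a theme; each theme supplies its own nouns.
ADDRESSEES = {"creditor": "money", "soldier": "war"}
THEME_NOUNS = {"money": ["wallet", "ledger"], "war": ["medal", "banner"]}

def compose():
    conclusions = []  # candidate closing lines accumulate here
    addressee = random.choice(sorted(ADDRESSEES))
    theme = ADDRESSEES[addressee]            # line one sets the theme
    lines = ["Listen up, " + addressee]

    owner = random.choice(["my", "your"])
    noun = random.choice(THEME_NOUNS[theme])  # noun drawn from the theme
    lines.append(f"Pickle {owner} {noun} in vinegar!")
    # the pickling instruction suggests a matching conclusion...
    conclusions.append("The best things come in potting jars.")
    # ...and a first-person possessive suggests another
    if owner == "my":
        conclusions.append("I will not complain.")

    lines.append(random.choice(conclusions))
    return "\n".join(lines)
```

The point of the stack is that the conclusion is chosen from lines the earlier decisions have earned, so the ending usually feels like it belongs to the poem rather than being bolted on.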

Procedural similes: to feed both Mary Jane’s speech and some of the poem forms, I have a subsystem that makes similes. Some of the simile corpus is pulled from web pages that list common idioms. The rest is based on searching BYU n-grams lists for adj-noun pairs such as “cold case” or “hot coffee” and then whittling that list down to the ones that struck me as the most memorable, proverbial representatives of their chosen adjectives.
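Once the n-gram list has been whittled down, the simile subsystem can be as plain as an adjective-to-nouns dictionary — a toy version, with pairs echoing the examples in the post:

```python
import random

# Each adjective maps to the nouns that survived the whittling-down:
# the ones that felt like proverbial representatives of it.
SIMILES = {
    "cold": ["a case file", "charity"],
    "hot": ["coffee", "a griddle"],
}

def simile(adjective):
    """Produce an 'as X as Y' simile for a known adjective."""
    return f"as {adjective} as {random.choice(SIMILES[adjective])}"
```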

Filters: the other thing I did falls into the category of Cheap Tricks. The text converts “ing” to “in'” if the robot has learned about cowgirls; “the” to “ye” if the robot has read Ye Goodwyfe’s Bodice Overfloweth. There are drop-in alternatives for some other very common phrases.
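Pulling the pieces together, the tag-driven selection described at the top of this section plus a cheap-trick filter might look like the sketch below. The grammar, tags, and filter are invented for illustration (and the game itself isn’t written in Python), so treat this as an analogy rather than the actual implementation:

```python
import random
import re

# Each rule carries a set of tags; the robot's training profile (also a
# set of tags) filters which expansions are currently eligible.
GRAMMAR = {
    "line": [
        ({"poetry"}, "Thy heart is #simile#."),
        (set(), "I am thinking about #simile#."),
    ],
    "simile": [
        (set(), "a coin"),
        (set(), "a cooling griddle"),
    ],
}

def expand(symbol, profile):
    """Pick an eligible rule, then recursively expand #nested# symbols."""
    options = [text for tags, text in GRAMMAR[symbol] if tags <= profile]
    out = random.choice(options)
    while "#" in out:
        start = out.index("#")
        end = out.index("#", start + 1)
        out = out[:start] + expand(out[start + 1:end], profile) + out[end + 1:]
    return out

def speak(profile):
    text = expand("line", profile)
    if "cowgirl" in profile:          # cheap trick: word-final ing -> in'
        text = re.sub(r"ing\b", "in'", text)
    return text
```

Training the robot just adds or removes tags from the profile; the same grammar then produces differently flavored output without any rule being rewritten.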


Tagged: steph cherrywell, text generation




via Planet Interactive Fiction http://planet-if.com/

June 5, 2016 at 04:27PM
