Thank you all for attending. Here are the unstructured notes for those who couldn’t make it. I’ve bolded the key points. This is a conversation about AI quality, not AI ethics.
Panel
Paul Stephanouk - Design Director of Candy Crush. Previously EA, Zynga, Bossfight, Schell Games, Big Huge Games. 20 years experience building and running creative teams.
Kelly Tran - Game Design Professor researching games and players. PhD in learning and tech. Personal Twitch - Group Twitch - Website
Jon Radoff - CEO Beamable. Previously Disruptor Beam. Entrepreneur, game designer, metaverse builder. Founder of Game Industry Club on Clubhouse.
Xelnath - Game Designer on World of Warcraft, League of Legends, Ori and the Will of the Wisps, and Snackpass Tochi. Founder of Game Design Skill.
Dave Neale - Designer of several Sherlock Holmes boardgames and 5 Minute Chase from Board & Dice. PhD Researcher and consultant in the psychology of play.
Ken Levine - Creative Director, System Shock 2, Bioshock, Bioshock Infinite. Looking Glass Studios, Irrational Games, Ghost Story Games.
William Volk - Advisor at Ownerfy. Previously Activision VP of Tech, co-founder of Playscreen, and creator of The Climate Trial.
Mohamed Abdel Khalik - Co-Founder Karnak Studios, Creators of The Daily Tut webcomic and Game Director on Tut Trials, an upcoming high action 3D platformer. Currently fundraising.
Notes
When it doesn’t work it’s AI but when it works it’s an algorithm.
We still don’t have believable non-player character (NPC) conversations.
Hot take - it’s only important for AI to deliver human-like behavior when human-like behavior enhances the experience.
Human behavior is in the minority of what’s needed. A lot of what has to happen is acting: an actor isn’t representing who they are but is instead playing a part.
AI is complicated yet simplified.
Designers should not try to create an AI that simulates a human but instead should simulate outputs in a certain context.
Conversation is hard because it requires NLP, but moving a chess piece is not.
AI design depends on the kind of experience a designer is trying to form.
There are new and exciting actors every day as we find ways to broaden AI computation.
Designers have used finite state machines (FSMs) for a long time, and they are rigid systems. An FSM knows winning and losing but not “closer to winning.”
The current solution is to add more buckets of states to the list, because humans have a stronger understanding as the slider shifts between winning and losing. Soon we will be able to automate this.
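The bucket idea above can be sketched in a few lines. This is a hypothetical illustration, not anyone’s shipped system; the state names, thresholds, and `classify` helper are all assumptions for the example:

```python
from enum import Enum, auto

class CombatState(Enum):
    # Instead of just WINNING/LOSING, add intermediate buckets so the
    # FSM can respond to being merely "closer to winning".
    DOMINATING = auto()
    AHEAD = auto()
    EVEN = auto()
    BEHIND = auto()
    DESPERATE = auto()

def classify(advantage: float) -> CombatState:
    """Map a continuous advantage value in [0.0, 1.0] onto discrete buckets."""
    if advantage > 0.8:
        return CombatState.DOMINATING
    if advantage > 0.6:
        return CombatState.AHEAD
    if advantage > 0.4:
        return CombatState.EVEN
    if advantage > 0.2:
        return CombatState.BEHIND
    return CombatState.DESPERATE

# Each bucket drives a different canned behavior.
BEHAVIOR = {
    CombatState.DOMINATING: "taunt",
    CombatState.AHEAD: "press_attack",
    CombatState.EVEN: "probe",
    CombatState.BEHIND: "play_safe",
    CombatState.DESPERATE: "flee",
}
```

Adding buckets is exactly the “more states” workaround the panel describes: the machine stays rigid, but the grid gets finer.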
Language is complex; a human can anticipate upcoming words because they understand the context and sentence structure.
AI is really sensitive to aberrations.
FSMs are lateral, meaning they can’t do deep NLP computations on the fly.
Creating fakes gets you 80% of the way there. An FSM with randomness is good enough, but we’ve been stuck there for a while.
“X=RND(3) constitutes valid prototype AI.” - Brian Reynolds
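The Reynolds quip is the BASIC idiom for “roll a die and pick one of a few canned behaviors.” A minimal Python rendering, with the behavior names invented for illustration:

```python
import random

# A uniform random pick over a handful of canned behaviors
# (the BASIC idiom X=RND(3)) is a perfectly serviceable
# prototype AI to playtest against before writing anything smarter.
BEHAVIORS = ["attack_nearest", "defend_base", "gather_resources"]

def prototype_ai_turn(rng: random.Random) -> str:
    """Return this turn's behavior: X = RND(3)."""
    return BEHAVIORS[rng.randrange(3)]
```

The point isn’t the code; it’s that randomness over plausible actions reads as intent to players, which is why FSM-plus-randomness has been “good enough” for so long.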
In early games, AI meant production systems, and neural nets were considered a thing of the past.
GPT-3 can actually speak with you. How meaningful it is and how well it forms narrative objectives are interesting problems to solve.
If I say “table” you’ll assume chairs around it. By feeding visuals into next-gen AI, it’ll learn these associations.
If you import raw data of humanity you get a shadow of humanity.
Who has a say in the raw data? What are their human ideologies?
GPT-3 leverages existing data structures; designers can program it by scraping massive datasets, and it comes to conclusions accordingly.
GPT-3 doesn’t make new art but makes art out of existing art.
Programming GPT-3 is limited compared to what it can do with massive data structures.
If designers go in and edit, they’ll be limiting the computer, because it can do much more with the massive data structure.
If you teach a computer the rules of ageism in literature, it can discover linguistic patterns a human reader did not detect.
GPT-3 can read an invoice in the style of Hemingway.
GPT-3 just looks for patterns - can it do Hemingway without the alcoholism? This is where the cool kids working on the tech come in, and questions of AI ethics pop up.
The orchestra needs a director.
Humans will continue to be a step ahead of the AI.
AI can now understand our games the same way we do. AI can now DM the experience, keeping the player at the edge of the flow state.
AI is about creating moments of opportunity for the player to showcase their own greatness, and punishing them when they don’t, in order to create a rich experience.
AI Creates moments of opportunity and pressure.
“I’m under stress, but the window of opportunity to counter and come out on top is high.” That feeling creates good player feel.
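The “AI as DM” idea above is, at its simplest, dynamic difficulty adjustment: nudge pressure up when the player is cruising, open a window when they’re drowning. A minimal sketch, where the 0.6 target win rate, the dead zone, and the step size are all assumed values for illustration:

```python
def adjust_difficulty(difficulty: float, recent_win_rate: float,
                      target: float = 0.6, step: float = 0.05) -> float:
    """Nudge difficulty in [0.0, 1.0] so the player keeps winning
    roughly `target` of the time: stressed, but with a high window
    of opportunity to counter and come out on top."""
    if recent_win_rate > target + 0.1:      # cruising -> add pressure
        difficulty += step
    elif recent_win_rate < target - 0.1:    # drowning -> open a window
        difficulty -= step
    # inside the dead zone: leave the player in flow
    return max(0.0, min(1.0, difficulty))
```

Real systems track far richer signals than win rate, but the loop shape is the same: measure, compare to the flow target, nudge.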
Initially, players have to fit the games; the games don’t fit the player. But we are moving toward games that can adjust themselves to fit the player.
We don’t make bespoke experiences; we work in averages, creating experiences for a few player profiles/Bartle segments.
Imagine designing an AI that builds a player profile and communicates it to the game, much like control preferences.
For example, the AI informs the game that the player hates crafting, so minimize crafting, and is good at shooting, so maximize shooting.
AI can take a game you only like and make you love it, and take a game you didn’t like and make you like it, by min-maxing the mechanics players enjoy.
The multi-armed bandit algorithm can mess up depending on the test.
It’s a single-variant solution to a multi-variant problem.
Sometimes it’s hard to suss out the other variables.
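For context, a multi-armed bandit in its simplest (epsilon-greedy) form looks like this. A hedged sketch, not any particular product’s implementation; the function name and parameters are invented for the example:

```python
import random

def epsilon_greedy(reward_sums: list, pulls: list,
                   epsilon: float, rng: random.Random) -> int:
    """Pick an arm (variant): explore a random one with probability
    epsilon, otherwise exploit the best observed mean reward.

    The panel's criticism is visible in the signature: each arm is one
    standalone variant, so interactions between variables (the "other
    variables") are invisible to the bandit."""
    if rng.random() < epsilon:
        return rng.randrange(len(pulls))
    # Unpulled arms get +inf so they are tried at least once.
    means = [s / p if p else float("inf")
             for s, p in zip(reward_sums, pulls)]
    return max(range(len(means)), key=means.__getitem__)
```

If your players’ response depends on combinations of changes rather than single variants, the bandit happily converges on a locally best arm and never notices.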
So much experience is locked because people haven’t mastered tools yet.
Experiences like those promised by the Wii’s Vitality Sensor.
D&D is popular because it is tailored and intimate, with high player agency. Games can widen that context.
Candy crush is a flow state machine. Skyrim is a flow state machine.
To conclude - games are compelling, and they are about to get a lot more so.