S&box, the long-awaited successor to Garry's Mod, arrived on Steam this week with a serious problem: the community is flooding the platform with AI-generated content, and players are noticing.
The new game, built on a modified version of the Source 2 engine after years in development, is designed around player creativity: users make mods, games, and experiences to share with others. That vision is being undermined by low-effort AI content swamping the platform, and Steam reviews currently sit at Mixed. Negative reviews specifically call out the problem, with one describing the game as "AI-infested."
Garry Newman, who released the original Garry's Mod on Steam in 2006, is not happy about it. He told Rock Paper Shotgun that Facepunch Studios intends to take action against what he calls "obvious AI-created slop."
"Low quality, obvious AI-created slop is going to be a growing problem in every creative outlet," Newman said. "We don't encourage using AI to be creative. We don't encourage using AI to create games for you. But we do acknowledge that it's a good learning tool and it's a good productivity tool. We'll be taking action to promote human creativity and push obviously AI-created slop off the main page."
Newman's response suggests Facepunch will demote rather than outright ban AI content, reducing its visibility on the main page without removing it from the platform. How strictly the studio will enforce this remains unclear.
The timing adds another challenge. S&box creators can now publish their work as standalone games on Steam through a partnership with Valve, but that integration could amplify the AI problem across the broader platform if the issue isn't contained. The original Garry's Mod has remained successful since 2006 in part because it kept its $10 price tag and built a strong creative community; S&box now faces the moderation headache that comes with rapid scale and easy AI tools.
Some user complaints also mention an influx of NFT and cryptocurrency activity, suggesting multiple unwanted communities are trying to establish themselves on the platform. That overlap complicates any enforcement strategy: banning one type of content could look selective if similar bad-faith behavior goes unchecked elsewhere.
The real test will be whether Facepunch can reliably distinguish between legitimate AI assistance and lazy, mass-produced content in a way that feels fair to creators and satisfying to players. As AI generation tools become easier to use, keeping a creative platform clean gets harder, not easier.
Author's note from Emily Chen: "Newman's statement sounds reasonable, but execution is everything, and AI slop detection at scale is a mess no studio has solved yet."