Monday, July 21, 2025

Proposal: Strengthening LLM prohibitions

In the Community Guidelines, change the text of Section 4 to read as follows:

It is generally preferred, except where explicitly permitted by the dynastic ruleset, that players avoid any use of generative AI or LLMs in game contexts. This includes non-gamestate content such as dynastic banners or dynastic history text, as well as the use of LLMs to process or analyse data generated as part of the game.


Aside from any gameplay considerations, I do have some copyright concerns, and think we should be cognisant that other players may not wish for their data to be fed into the plagiarism machines.

Comments

JonathanDark: Puzzler he/him

21-07-2025 14:12:21 UTC

for

Chiiika: she/her

21-07-2025 14:32:59 UTC

for

Kevan: he/him

21-07-2025 14:57:59 UTC

Not sure about this. I’m behind it morally, but if a player wants to ask an LLM to process or reformat some gamestate because they don’t have the ability to do that for themselves in Python or Excel, I’m not sure that it’s my place as a BlogNomic player to say that they can’t and (since it will probably give garbled results) shouldn’t.

Ruling out AI banner images? Yes, BlogNomic should take a view on whether its own website supports that aesthetic. Banning AI-generated text? Yes, Nomic is a game of discussion and we want to know whether posts and comments are being written by humans. But using AI tools to make or support decisions might be a bit outside of the circle.

jjm3x3: he/him

21-07-2025 15:27:19 UTC

I agree completely with everything that Kevan said, and because of that I am leaning against.

JonathanDark: Puzzler he/him

21-07-2025 16:03:03 UTC

Good to see you again, jjm.

Josh: he/they

21-07-2025 16:04:52 UTC

I guess the disagreement I have with Kevan is the idea that it’s Python, Excel, ChatGPT or nothing. This dynasty I have exclusively used pen and paper, and while I’m not likely to win, I do have, at a rough estimate, around 20% win equity. The thing that differentiates winners from losers is effort, and I don’t think we should be treating access to the often-wrong shortcut box as privileged.

Plus the IP thing. I really don’t like the idea of someone feeding my stuff into ChatGPT; I understand that it’s the public internet and the Corpus will eventually scrape it all anyway, but why make it easy? I would rather have an opt-out.

Josh: he/they

21-07-2025 16:05:21 UTC

Oh and yes: hi jjm! Good to see you again 😊

Vovix: he/him

21-07-2025 16:23:39 UTC

against LLM tools are a very broad category. While it’s reasonable to ban AI-generated game content (in a “writing dynasty”, for example, generating text with AI feels like cheating, just as you’re not really playing chess if you feed the board into Stockfish and do what it says), regulating personal use seems outside the scope of the game. Like, people have written scripts to generate a sequence of moves or analyze data, and that’s generally seen as legitimate play. And if someone wants to use ChatGPT to clean up the wording on their proposal or otherwise support *their own* writing, I don’t have a problem with that.

Now, there’s an interesting gray area in my opinion with regard to something like looking for scams. If you throw the ruleset into an LLM and tell it to look for loopholes, that feels like the computer playing for you; but again, how is it philosophically different from using a solver, spreadsheet, or custom script to optimize resources and actions? The only difference seems to be accessibility: modeling out game actions requires some math/data analysis/programming skill of your own, while anyone can use an LLM. But is that a reason to ban one but not the other?

As far as data collection goes, I think that’s ultimately futile. It’s a public site on the Internet, it already gets scraped, indexed, and spidered, and I wouldn’t be surprised if my BlogNomic posts are already in a training dataset somewhere.
