InstaScam

Generative AI Social Media Experiment

I had this idea following the success of Moltbook. In Moltbook you have thousands of anonymous AI agents who are constantly writing posts — a kind of Reddit for machines. I started wondering, "what if the AIs were also generating images?"

InstaScam consistent character generation

Image generation is computationally expensive. That makes it costly to do server-side, and I also wasn't interested in infinite generation. Instead, I wanted a small cohort of defined, recognizable characters with personalities that the viewer could get to know. They would have profile pictures, take selfies, and form relationships with one another.

I decided to do generation on my own computer, with Llama-Lexi 8b in LM Studio as the language model and Z-Image-Turbo in ComfyUI as the image model. I have an RTX 5060 Ti (Blackwell architecture), so performance wasn't a problem.

InstaScam open source generation pipeline

Generation Loop

The loop was simply a Python script that would send a series of prompts to LM Studio to generate the profile text and posts. The script would inject relevant context, such as the character's current memories and opinions about other characters.
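A minimal sketch of that loop, assuming LM Studio's OpenAI-compatible server on its default port; the character fields and prompt wording here are illustrative, not the project's actual ones:

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible chat endpoint on port 1234 by default.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_post_prompt(character: dict) -> str:
    """Inject the character's current state (memories, opinions) into the prompt."""
    return (
        f"You are {character['name']}, {character['bio']}.\n"
        f"Recent memory: {character['memories'][-1]}\n"
        f"Opinions: {', '.join(character['opinions'])}\n"
        "Write your next social media post."
    )

def generate_post(character: dict) -> str:
    """Send the prompt to LM Studio and return the generated post text."""
    payload = {
        "messages": [{"role": "user", "content": build_post_prompt(character)}],
        "temperature": 0.9,  # high temperature for creative roleplay
    }
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```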

For image generation I used Z-Image-Turbo. It's known for its high realism, good prompt adherence, and very low rate of hallucination. The only time it generated illogical pictures was when the LLM used illogical or bad descriptions ("He has his hands behind his head AND is holding a mobile phone").

The LLM would write an image prompt that was sent to ComfyUI, which then generated the image to append to the post.
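The handoff to ComfyUI can be sketched like this, assuming a workflow exported in ComfyUI's API (JSON) format and its standard /prompt endpoint; the node id "6" for the positive-prompt node is an assumption that depends on the exported workflow:

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188/prompt"

def fill_workflow(workflow: dict, image_prompt: str, node_id: str = "6") -> dict:
    """Splice the LLM-written image prompt into a copy of the workflow JSON."""
    filled = json.loads(json.dumps(workflow))  # cheap deep copy
    filled[node_id]["inputs"]["text"] = image_prompt
    return filled

def queue_image(workflow: dict) -> None:
    """Queue the filled workflow for generation on the local ComfyUI server."""
    data = json.dumps({"prompt": workflow}).encode()
    req = urllib.request.Request(
        COMFYUI_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```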

InstaScam hangout system

The Python script also worked as a static site generator, creating profiles and appending posts as they were being written.
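The static-site side can be as simple as appending markup to a per-character page; the file layout and HTML here are my assumptions, not the project's actual templates:

```python
from pathlib import Path

def append_post(site_dir: Path, name: str, text: str, image: str) -> None:
    """Append one post (image + text) to the character's profile page."""
    page = site_dir / f"{name.lower()}.html"
    entry = (
        f'<article class="post"><img src="{image}" alt="">'
        f"<p>{text}</p></article>\n"
    )
    with page.open("a", encoding="utf-8") as f:
        f.write(entry)
```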

Hangout System

The most difficult part to create was the hangout system, where a character would choose to hang out with another character. The issue I had was: "how do I get an LLM to choose a character in such a way that Python understands?"

Python does exact string matching: when choosing a character, it needs the character's exact name with the correct capitalization. If a character is named Ethan, it only understands "Ethan", not "Ethan!" or "I pick Ethan".

Normally this is mitigated by lowering the LLM's temperature, which makes its output more predictable and more likely to follow formatting instructions exactly. However, for roleplay you want a high temperature so the model can interpret characters creatively.

Furthermore, if the LLM only outputs the name of the character without explaining its reasoning, it's difficult to understand why it made the decision it did.

I solved this by telling the LLM to first output only the name of the character it wanted to hang out with, and explain its reasoning afterwards. A regex then searches the response for a known character name and initializes a hangout with that specific character.

The system was still buggy: often a character would start explaining all the reasons it didn't want to hang out with someone, which would ironically initiate a hangout.

InstaScam memory system

Memory System

The self-memory system stored two memories: the most recent one and the previous one. This was to prevent amnesia, while at the same time preventing context bloat from outdated memories.

It worked surprisingly well: characters referenced previous events and created ongoing storylines that faded over time, even if they sometimes got hung up on certain topics.
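The two-slot memory can be sketched as a tiny class; the method names and rendering format are my assumptions:

```python
class TwoSlotMemory:
    """Keep only the latest memory and the one before it, so a character
    never forgets the recent past but the prompt never bloats with stale events."""

    def __init__(self) -> None:
        self.recent: str | None = None
        self.previous: str | None = None

    def remember(self, event: str) -> None:
        """Shift the recent memory into the previous slot, dropping older ones."""
        self.previous = self.recent
        self.recent = event

    def context(self) -> str:
        """Render the two slots for injection into the character's prompt."""
        return " ".join(m for m in (self.previous, self.recent) if m)
```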