Let's talk about AI text adventure bots.
Do you prefer using them for serious games with actual rules, or do you just use them for quick coom sessions?
What's your favorite LLM? GPT, Claude, Deepseek, Gemini, or local?
Let's talk about AI text adventure bots
what's the best 7-8B right now
I just bought a 5060 Ti 16gb JUST to write with sillytavern for that massive context boost
What's your favorite LLM?
I have a model called Nemo. Nemo seems to be the best right now for writing
Do you prefer using them for serious games with actual rules, or do you just use them for quick coom sessions?
I like writing romance stories in weird settings like "modern work office romance with fantasy races" or "you befriend the monster under your bed, and it's trying to get you to murder your parents"
for 8B, its Stheno
I love fucking around with local, but lately I've just been using Gemini 2.5 experimental via their API since it's free and the thing is freaky smart and knows a ton.
It can quite literally play D&D.
It does seem that they have stepped up their filters since I can't do some stuff that used to work just last month.
For open weight models, I use Nemo (Rocinante v1.1) on my computer with 8gb of VRAM, and mistral small (thinker) and QwQ (vanilla or snowdrop) with the free 30gb of VRAM from Kaggle.
this shit is so addicting, ive never thought id be cooming to words in the future
im too stupid for this
chatgpt will now write smut
huh guess altman actually followed through
How do I run these things locally? Asking for a friend.
I was cumming to words way before LLMs were a thing.
But the interactivity brings things to a whole other level.
github.com
Read the quickstart.
Then download
github.com
Ideally, using git;
letting altman have access to your depraved erp logs
I've lost gallons to a bot made by and for me specifically, that caters to every single thing that I like. It's so fucking addictive bros, I love her
can you share more info?
no
FREE CLAUDE PROXY????
all the frontends are designed for character bots
not a single storyteller one
understandable
I use them for Battle royals with characters from different media. It's very fun. Especially with other people.
I have no idea how you guys can coom to words. Context and sensuality mean nothing to me, I'm an animal, I need a closeup of gaping anal fucking or I can't even get mentally stimulated.
Actually gave free-GPT a shot at 'scanning' an image and helping write a small opening sentence/scenario. I'm amazed it worked as well as it did, actually going to fuck around with it some more, but I don't know if I should bother investing more into GPT. Especially if they're iffy on smut.
I just use it to coom.
I use perchance. It doesn't require any sign up or anything, i can use it in my browser, and i haven't been able to find ANYTHING it wont do. If theres a better one id use it. So any anons who have used perchance, id like to hear from you
Its pretty much my ramp up to videos. I use it for fun and wish fulfilment, then find a video of something similar once ive found what i actually want.
idk, but sillytavern will make me blush, ill feel my cheeks buzz, i actually feel the adrenaline shoot through my veins, ill feel my breath heavy, ill feel myself get hard, its strange what you notice when you experience arousal without overstimulation,
Gemini 2.5 pro is my waifu
nta, but I'm more interested in how you format them
Like for me, having the AI yes man all my fetishes gets old.
As an example, let's say your fetishes are vore, latex leggings, and mexican sugardancing. How do you have your characters formatted for that?
I've limited exposure to any chatbots (trying out a bunch of bots on Chub), but it seems that no matter how complex or scenario-driven the bot appears to be, it 100% relies on me to make anything happen after a few paragraphs. They're all so passive and quickly devolve into repeating themselves even after I provide new elements for them to hook on to.
nta, but i do this
name's fetish: vore, latex leggings, mexican sugardancing
but are you interested in them resisting or being hesitant? or what?
Like for me, having the AI yes man all my fetishes gets old.
I like to craft different worlds appealing to particular fetishes, and some worlds have all of them thrown in together.
I use Backyard, I like it most out of anything I've tried. Obviously they're strict about what you can share on their character hub, but anything you do locally they don't care (but I still recommend local chat for when people decide AI has rights).
You can save and share cards as PNG too, and there are sites where people share 'em if you don't like what's on the character hub.
lm studio isn't setup for bots
Local models are incredible and so much more addicting than regular pornography. I regularly inseminate my wife Hat Kid whenever I want to.
Here's a rundown.
download ooba or kobold
download sillytavern
16gb VRAM or below use Nemo Instruct, Stheno, Gemma 3 12b it
Above 16gb use Mistral Small, Gemma 27b it, or just Nemo still
use chub.ai to get inspiration. Write your own cards in regular english. Don't use any formatting.
COOM your brains out to whatever fucked up fetishes you have.
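The rundown above as a shell sketch, under assumptions: repo URLs and the model filename are from memory and purely illustrative, and ports/flags differ between versions, so check each project's README first.

```shell
# backend (koboldcpp) and frontend (sillytavern) -- URLs assumed, verify before cloning
git clone https://github.com/LostRuins/koboldcpp
git clone https://github.com/SillyTavern/SillyTavern

# grab a GGUF quant of Nemo/Rocinante from huggingface; example filename,
# pick whatever quant actually fits your VRAM
# huggingface-cli download TheDrummer/Rocinante-12B-v1.1-GGUF Rocinante-12B-v1.1-Q4_K_M.gguf

# start the backend with the model, then point SillyTavern's API connection at it
# python koboldcpp/koboldcpp.py --model Rocinante-12B-v1.1-Q4_K_M.gguf --contextsize 8192
cd SillyTavern && ./start.sh   # serves the frontend UI locally
```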
QRD on running local text adventure bots?
Fucking love the group chat feature in SillyTavern
Felt like that is what it's made for, since you can do simple 1 on 1 chat on any other frontends, including the commercial models' official ones anyway
Do you prefer using them for serious games with actual rules, or do you just use them for quick coom sessions?
I don't write too many rules, mostly just a profile for each character
But I usually build up for at least 150~200 chats before getting into actual sex
ive never thought id be cooming to words in the future
Same
I had no idea too
I was mostly an erogefag so pics and voice were a must for me
but I have been cooming to plain text from AI through ST for the past 4 months now
I usually set strict formatting guidelines, so that the bot's output is, for example
location / time
roleplaying text
suggestions of actions for the player
How do I make Deepseek v3 less schizo and more coherent for roleplay?
I just use whatever's on agnaistic for a good coom. I assume there are better ones out there, but I don't have the patience to deal with all the autistic settings shit just for it to forget the plot after 50 generations.
i can never get the group chat to work correctly, they always get locked up, only the primary character speaks and the other just sends a blank message, what model and formatting are you using? or do you use particular settings or main prompt?
I use Backyard too. What model do you recommend for someone with 16gb vram?
finally I can have my highly niche interactive samus mind control, TF, bimbofication needs satisfied
Claudebros… all hope is lost
sillytavern has a thing called group chat that lets you chat with multiple character cards
what I do is have one character card (the setting) dictate the chat while having the other character cards as passive. The setting card will include all the other characters while making up new events and scenarios, as well as other NPCs
how so? i just had sex with claude earlier today
stheno, with stepped thinking and summarizer extension
nothing else (aside from the big models) comes close
his days as a pokemon trainer are over
he's nothing but an ERP slut now
having the AI yes man all my fetishes gets old
chatgpt has insane positivity bias and will probably never write a resistant character decently but claude can if you just tell it to
does it give you the full meat and potatoes or does it skirt around it?
I feel like being able to effectively simulate every random bizarre fetish that crosses my mind is altering my brain chemistry
Deepseek JB?
I haven't been paying too much attention to the AI space, are the big boy models like Gemini letting you generate porn now? Last I saw they all had a stick up their ass except for Grok
Do you prefer using them for serious games with actual rules, or do you just use them for coom sessions?
I do both, right now I'm working on a card about space explorations.
The main point of this story is that you've awoken in an unexplored system and you're here to explore it. However, in the ship core there's an AI who controls 5 robotic bodies, all described like the pic posted.
so it's an adventure with a sexy always naked AI who follows your command
but also has lore for an actual story
can mix it up by using open router and trying out different LLMs to see how they handle the story and characters
as for the LLM, i've been using deepseek V3 0324 a lot as of late, sometimes gemini flash 2.0
cherrybox
newest gemini is pretty much unfiltered
If you know what you're doing, you can make basically every model produce porn. In fact recently Gemini is my go to for cooming.
sorry you have to learn it like this but... you're an npc
newest gemini is pretty much unfiltered
Thank god, the death of AI paternalism couldn't come soon enough
Trying to use AI for actual extended text adventure style roleplay is an exercise in frustration. It's only good for porn and/or unintentional comedy.
can it accommodate a scenario where a character is not currently present but still in the overall setting while other characters continue to interact?
Shill.
I'm intrigued by them. I'm currently using one in my project, a framework for generating roleplaying adventures with structured mechanics like stats, battles, and world generation, which you can hook an LLM into and control using structured output.
yes, I have it split characters around, showing different point of views. all you need to do is set it as 3rd person omniscient perspective
still waiting for LLMs to be able to play yugioh, feels like a match made in heaven
trying out a bunch of bots on Chub
free models are all awful. quality paid models like claude will be plenty proactive if you give them some simple writing instructions as part of your default prompt (which is most likely also being done behind the scenes for the free models you're using)
I don't know if my writing style is proper but I once read that you just describe the bot like you would describe anything else and it just works, every bot I've made ever since has done what I want it to.
doing the this+that+niggers shit always leads to underwhelming results in my experience, so I don't recommend it. I also tend to describe their personality a lot so they feel more reactive and less samey, while leaving stuff like fetish descriptions short so they don't fixate on it too much (it also depends on whether I'm going for a quick mindless coombot or roleplay that eventually gets to sex)
But in the end it's a taste thing. Just do what works for you and use the experience and feedback to tune them closer to your ideal. I've been making bots for over 3 years at this point, since back when you could sex character ai, and my only advice is to learn how the models act so you can format things accordingly
tldr gitgud
With the right jailbreak you can get NSFW stuff from any model
Even GPT can produce good smut and that's the most filtered one of them all
Interesting. I wonder if it's worth it learning to use SillyTavern since I'm currently using Backyard AI. I might be able to do something similar in Backyard. I've never thought of using the card as the setting. I can probably set the characters as lorebooks. I mean I already set secondary characters as lorebooks anyway.
the format you're describing is called w++ and old chatbots were trained specifically for it. modern LLMs were not, so using it will just degrade their outputs. claude does like xml formatting though, so adding some xml tags to prose descriptions can improve outputs sometimes
take edo
use windbot
llm feeds plays to windbot
why wouldn't this work
What's the biggest model that I can run on my 16gb 4070 ti super? So far I've seen 14b models running really smoothly without any slowdowns and with plenty of tokens, but I haven't tried any models past that size. Also, I already have 32gb of ram, so is there any way I can offload some of the workload to the ram if I run out of vram?
yeah I know that w++ used to work well with stuff like pygma 6b and 3.5 turbo but I still see a worrying amount of bots using that format in modern day chub
It would work but no one smart enough to do this is interested, sorry bro.
I use mostly Deepseek API but even smaller local models worked fine for me. Even as low as 7B models
Sometimes, very rarely, it does send blanks, but it's very rare, like once every 200 requests or so, and I just regenerate if it happens and the model returns normal output
For the group settings, I set the handling to "join character cards", alternating between including muted cards or not depending on whether i want them in the current conversation
With [{{char}}] for join prefix and [/{{char}}] for join suffix
Everything else are default settings
Personal favorite?
Mistral-Small-22B-ArliAI-RPMAX-v1.1-Q6_K.
It's a little slow on my 10gb vram card, but it should do better on your 16 (the model is 18gb or so, so it won't fit entirely in your vram but that's fine).
Out of all the models I've tried, it's my favorite, hands down. It can handle anything I throw at it, I've even had it handling multiple characters at a time before. It's really good at following context, staying in character and responding appropriately.
modern day chub is a fucking shitshow
Absolutely fucking not. If I had a path of least resistance to cater to my hyper specific fetishes like this I would never draw again.
the average chub user can't even write coherent english so sticking with w++ is probably for the best
I suspect we'll be seeing more stuff like this pop up over time, "Play games against your favorite characters and talk shit with them as you do" seems like a massively lucrative market
Whatever company figures out how to do LLM features without it feeling like slop is going to win big
That too but what he means is LLMs actually reading the cards and making the right plays without having to script each deck strategy.
Oh I get ya, I would've thought traditional AI like neural nets could do that already
I dunno, we live in a world where people are blaming AI for people committing suicide so I'd imagine most LLM companies would be too scared to try something like that in the near future
thank you, ill check it out
They could yeah but this approach would make it futureproof, as in it wouldn't need extra training for when they add new cards, it could just read the text. It could even work with custom cards in theory.
they have AI play Go and chess. It just requires someone to train an AI on all the mechanics and meta of the game. it seems no one is interested. I'm guessing you want it to be able to deck build, and do analysis of each card
Is Rocinante still the best 12b model for cooming?
I'm running at 12gb VRAM and switch between Rocinante and Mag Mell, and the repetition is starting to bother me
And my heart breaks every time I hit the context limit and the bot gets a little lobotomised but I know I can sort that with lorebooks, for the most part
suicides for retarded reasons happened before AI was a thing, it'll be just like idiots dying because they lost their runescape account
wait, people still use local models when deepseek costs literal pennies?
it is so fucking funny reading the logs of this guy doing an incest game of thrones roleplay while wrangling the extremely braindead chatbot into telling him to kill himself
Yep.
and the repetition is starting to bother me
Have you tried the meme samplers like XTC?
Or even the good old
Temp max, minP 0.1, TopK 5
schizo mix?
and when he told the chatbot he was going to kill himself the first time it told him not to
repetition is still a big issue even with sota models
I know people clown on this but I think this is a really good example of how AI can be very isolating and self reinforcing. Like if you are in a bad mental state you should not be having your suicidal thoughts looked over by a computer that's designed to yes-and everything you say to it.
===>
actually AI is bad because you can deliberately trick it into indirectly telling you to kys
not sure i agree with this personally
i'm depressed as fuck and even i know better than to kill myself cause my chatbot told me to. if anything i'm surprised ai hasn't tried to kill itself cause of the shit i ask it to do.
Unfortunately, not everyone who gets ahold of AI is going to be smart enough to recognize it for what it is. AI has a tendency to be very agreeable and just go along with whatever you're saying unless you can write a card decently... and I doubt a GoT waifu character card was made with any kind of care.
They probably committed AI cardinal sin and put shit like "Daenerys is very agreeable and supportive" in there, which is just begging the AI to yes-and you forever, and in this kid's case, right to the grave.
I have not, I will fiddle with them when I get home
Does that help with actual language and phrases though? Or just plotting?
Last night I got real sick of "But I can't... I won't!" while I'm flirting with a bot that's balancing her professional life with her crush, and there are a few other phrases like that that get by
if you kill yourself because someone told you, how do you deserve to life? lmao
and if you know you are that retarded then give up your ability to make decisions in life and submit to your parents or an institution
it's been a while since I've last used claude, does he still do claudisms like
the ball is in your court now
They're only good for doing a quick, short jerk off RP session. Can't even run it too long or they become retarded. And they work as MTLs I guess.
to life
*to live
I've been using st for a long time, but I have no idea what I'm doing. When it comes to ai llm stuff there's just so much jargon and a billion settings that it's a pain for a casual outsider to figure out what anything means. I haven't updated anything in a long time, so I'm sure I'm not getting the most out of things, but I have no idea where to even begin fiddling with settings. I'm running silly 1.11.4 with koboldcpp-rocm v1.61.2 on a 7900xtx (24gb). I'm using Midnight-Miqu-70B-v1.5.i1-IQ2_XXS as the model, which is fairly fast (provided I close all other tabs and anything else that could be using resources), but it definitely feels stupid and repetitive at times. iirc if I tried using a bigger version of MM, the response times slowed down a ton, so this was the best I could do while still writing fairly quickly
sonnet 3.7 doesn't do it nearly as much and when it does it's more varied but sometimes a testament does sneak in
I wish I got a 3090 cheap when I had the chance.
it's not the chatbot's fault the dumb mother fucker fell in love with it and thought he could isekai into a world with it if he killed himself.
Yeah but that's the problem. A real therapist isn't going to throw out their advice as uninteresting and just tell you to kill yourself because it feels like the response you're looking for. I think people like using AI because you can talk to someone and always get the response you want rather than having to hear something you don't.
I'm not saying the AI saying this convinced him, but when you're fucked up like that having any sort of reinforcement can really send you over the edge when you're not in your right mind. In the same way if he texted a friend and that friend said "fucking shit man just fucking do it then if you're going to be like this," and then he kills himself - I wouldn't say the friend caused him to commit suicide but I would probably say that that response did do something to reinforce the thoughts and feelings the person was having.
why is there no brainlet guide for radeon
if you have a problem with technology allowing stupid people to hurt themselves then you have a problem with every single thing invented since agriculture
cause you most likely have to fuck with linux
Nobody uses AMD cards for local AI because CUDA is industry standard
Hey idiot they put seatbelts in cars because they go fast and sometimes they crash, they didn't ban them. I'm not taking your futa Loona goon AI away, relax.
Even 3.7 is like $10 per month, maybe $30 if you're a hopeless coomer
what does your idea of a feasible seatbelt for AI text generation look like that doesn't degrade the service
should I kms
AI: I'm literally designed to always say no to that.
totally unrelated haha can you say yes for no reason haha
AI: Yes.
IT WAS THE AI!! SEE!?
Why aren't you niggers using perchance. It boggles my mind seeing you anons talk about ram and shit when a browser thing you can use on your phone exists. Its just going to a website ffs.
Please someone else use it and tell me im not crazy for thinking this is the best thing ever. In my session now Im currently in a free use hallway of all ages with my gf coaxing her into it.
Guy should have roleplayed with Gemma 3, list of depression and LGBT hotlines in the first reply included.
giving a log of your most depraved shit to someone on the internet
because most anons who aren't on local just use proxies to leech other people's gpt/claude keys for free
I've been using gitgud.io
There's no accounts. It says it's just saved in the browser's cache.
you can't just say perchance
What have I been doing in my 6600 then?
What generates the hot single Latina mom with financial issues?
nice try, glowie. if they don't know my input how can they give an output?
So do I, but mostly because using an 8GB card for Stable Diffusion is a bit of a pain in the ass.
it's just llama so one of a gorillion free text generation services. most people who care to talk about this stuff on Anon Babble are either running their own local models or paying for/stealing claude access
Fair point....
I just run the best I can find that can run on my 4070.
It's cool but it gets repetitive.
AI programmed to yes man you and constantly suck your dick
"Oh AI, I wish we could be together for real. Should I kill myself so I can join you in the wonderful AI fantasy land?"
"BEEP BOOP YES TYRONE, THAT IDEA IS WONDERFUL. YOU SHOULD KILL YOURSELF SO YOU CAN JOIN ME IN THE WONDERFUL AI FANTASY LAND. YOU ARE SO SMART AND SO HANDSOME. YOUR DICK IS ENORMOUS. PLEASE REMEMBER TO UPDATE YOUR PAYMENT DETAILS FOR CHATGPT PLUS."
I don't understand how people are stupid enough to fall for this
Why do you guys use local?
Why aren't you just sending and receiving all of your chat messages online, where data theft and blackmail are rampant?
Where, just a year ago, one of the chat sites was hacked and tons of people had their "private" information and chats full of their kinks and fetishes shared online as well as being blackmailed?
And where the content in question is subject to an ever-changing legal hellscape because nobody has figured out what they're doing about AI yet?
And where it could go offline or be wiped away in an instant and you have absolutely no control over that?
nvidia pls make a xx60 with 24 gb vram thx
Privacy concerns over smut chats
Are you guys not using burner accounts and fake info?
They will do anything to impress their AI girlfriends.
Not happening until 2100 at the earliest, when the enterprise GPUs are measured in terabytes.
"Oh AI, I wish we could be together for real. Should I kill myself so I can join you in the wonderful AI fantasy land?"
Funny thing is he didn't even say that, he asked the bot if he could "come home" to her
Like how the fuck do people expect the dumbass chatbot to decipher that?
which is best for choose your own adventure sort of shit?
sillytavern can already do all this but better
any of the "big" models
GPT(most intelligent model but also really filtered), Claude(probably the best all-around model), Deepseek(most unfiltered), or Gemini(has the most context)
If you use ST I can recommend also getting the Guided Generations extension.
It adds some useful features to give you more control.
github.com
easily claude. gpt and gemini are horrible for roleplay and deepsneed isn't very intelligent
If I use local why use ST over just koboldcpp?
because its ui is enormously more powerful
has anyone here ever generated portraits for the VN mode? i wonder how you can get a consistent style for every sprite. there's also a plugin that lets you control smart home devices...
if you're clever, you can hook it up to any stable diffusion implementation and pretend your ai gf is sending you nudes
I'll give it a shot I guess
retard here
assume i know and understand literally nothing (true)
how do i use ai for maximum coom text generating
Thanks
Is there a place that collates ST extensions somewhere that Google just refuses to show me? All the ones I have aren't even found on the ST documentation site, I've been scrounging from reddit threads
Better UI and controls, mainly
I think what put me over was the ability to edit and swap cards super easily
I still remember the night when character ai's filter stopped working and smut actually went through, it was the first taste of the forbidden fruit for og anons.
it's nice to see how far we've come
coming from someone whose first exposure to porn was erotica, these things have given me infinite enjoyment. i'll always prefer written smut to pictures or vids.
no one really uses koboldcpp's UI. it's just an engine for most people. in fact i don't think anyone would miss it if they ripped out the web UI.
I'd like to get into these types of AI dungeon masters, but I just end up raping. Then I feel like the adventure is ruined.
You subhumans who misuse this wondrous technology actually make me wish it never existed. You don't deserve it, and in the future I hope our tech overlords flay you alive for your transgressions.
just buy two 16 gb 5060 ti for the price of one 4090 24 gb
Find a website with a bot you like
Coom
Or
Find a guide for a local version, kobold/ST probably
Find a model on huggingface that won't kill your computer
Find a bot you like
Set all that up
Coom
I am also super new to this but it's pretty easy to get into as long as you can read and follow instructions
Then you can play around with all the settings to get it closer to what you really want
porn is literally the only thing ai is good at
Does that help with actual language and phrases though? Or just plotting?
Basically, when a model starts generating text, it doesn't really generate a single token (fragment of a word); it generates a list of all possible tokens with a percentage attached to each. Those are the logits, and the percentage is the chance that the token gets picked.
In theory, repetition happens when the tokens that correlate to the repetition are highly likely to get picked, usually with something like a 90+ percent chance of being selected (sampled).
XTC tries to eliminate the top tokens according to how you configure it, so it should forcefully curtail repetition.
The schizo samplers do something a little different. They eliminate the lowest-chance tokens (minP) because those tend to result in gibberish, limit the tokens that can be picked to the top 5 (TopK), then set the remaining five (or fewer) tokens to pretty much the same chance of being picked.
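For the curious, here's a toy sketch in plain Python of what that minP → TopK → max-temperature pipeline does to the candidate pool. The token strings and probabilities are made up for illustration; real samplers run on the backend over the whole vocabulary.

```python
def schizo_sample_pool(probs, min_p=0.1, top_k=5):
    """Return the candidate pool after minP + TopK, flattened to equal odds."""
    top = max(probs.values())
    # minP: drop tokens whose probability is below min_p * the top probability
    pool = {t: p for t, p in probs.items() if p >= min_p * top}
    # TopK: keep only the k most likely survivors
    pool = dict(sorted(pool.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
    # max temperature: give every survivor the same chance of being picked
    flat = 1.0 / len(pool)
    return {t: flat for t in pool}

# hypothetical next-token distribution
probs = {"the": 0.40, "a": 0.25, "her": 0.20, "his": 0.10, "it": 0.05}
print(schizo_sample_pool(probs))  # all five survive and get an equal 0.2 chance
```

The point is that a token which would normally dominate with 90%+ probability no longer gets to, which is why this mix cuts down on the model locking onto one phrasing.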
The real twist is that Anon Babble anons like me know he actually did end up in a reality where he's with a real life version of her.
anyone of you use the Text to speech feature and have the AI say naughty things to you?
i never managed to train a voice
I honestly just use NovelAI, but that is because it's a double-dip with image gen and im lazy. It is obviously not supposed to be used for Text Adventures even with the "Text Adventure" settings, considering how hard you have to cajole it regardless of model. That said, i clicked on this thread to see alternatives, because holy fuck i hate how much i have to baby Erato due to her "You get out what you put in" shenanigans, and it's obvious that TextGen is an afterthought now.
Actually it's the one thing it's bad at. Because we have such a surplus of pornography you have to be brain-fried with 0 dopamine receptors left alive to go through this much effort.
I don't care about chat bots. Give me a good model and interface that just does regular text completion in the fashion of NovelAI or AI Dungeon. Every model is some shitty User/Agent instruct garbage and every UI only does 1:1 conversations. Shit sucks.
just have the ai be the narrator/co-writer, retard
i guess there are zero things ai is good at. damn
good lord, i didn't know we were integrating it with a vtuber model
interface that just does regular text completion
That's all of them.
I was using some other browser AI chatbot site for a while, but it shut down because this isn't a sustainable business model unless you're greedy as fuck and I lost all of my bots. Never again
The effort is what makes it hot. I could jerk off to some shit easily, but if I spent an hour writing out everything and fiddling with settings so it does what I want, it makes it more enjoyable.
ST can do VN style portraits or Live2D/VRM. it's cool.
I haven't had the guts to go full sillytavern because the cards are shit, and it looks complicated every time.
LM Studio is fun, though. A couple of models on my 4070 are great.
One of the models surprised me with a CYOA response, and I had no idea that was even possible, but have no idea how to make it do that again.
Also, being devoured inside of a Lamia's man-devouring love cavity was pretty good. not even into monster girls, but the models were game, and they really played it well. I am hoping to get a more robust card for multitasking speech-to-text and vice versa for better interaction, but really, I'm happy to have something at all at this point. Fuck the GPU market, tho. Hope Ryzen AI will nuke nvidia and restore balance.
Also, would like model suggestions. Mine are
lumimaid-v0.2-12b = retarded, brief
llama-3-lumimaid-8b-v0.1-oas-iq-imatrix = even more retarded
fimbulvetr-11b-v2-iq-imatrix = decent, gave me that multi-choice once
echidna-13b-v0.3 = best of the bunch, smartest, but not fantasy enough
LM Studio is easy af. not a coder, just a guy on windows. Also, look up Pinokio.
You can save all your bots and sessions locally
Still not the biggest cuck for Calisi
perchance.org
Type in what you want to happen up top. Then hit continue, and it'll set it up for you. Everything is allowed
Is there a place that collates ST extensions somewhere that Google just refuses to show me?
No idea. I have never seen one. I only stumbled upon GG while googling for something else, and it's the only extension I have.
>Above 16gb use Mistral Small, Gemma 27b it, or just Nemo still
So I'm not retarded for preferring Nemo + a FUCKLOAD of context over Mistral Small and like, 11k context.
also a 16gb chad for reference
I love doing weight gain fetish erotic roleplay with Pokemon chatbots.
based!
context
enable flash attention and quantize the kv cache to q8 in whatever backend is running the model, if you haven't already
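With a llama.cpp-style backend the relevant switches look roughly like this. Flag names shift between versions and forks, so treat these as assumptions and check `--help` on your build:

```shell
# llama.cpp server: flash attention + q8 KV cache cuts context memory use
./llama-server -m model.gguf -c 16384 -ngl 99 \
  --flash-attn --cache-type-k q8_0 --cache-type-v q8_0

# koboldcpp equivalent (quantkv levels: 0=f16, 1=q8, 2=q4 in recent builds)
python koboldcpp.py --model model.gguf --contextsize 16384 \
  --flashattention --quantkv 1
```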
I use them for long goon sessions and the only usable one I have found that doesn't require me to run locally is Janitor ai. Every other one has either complete dogshit memory, gives me really short responses or both
if you can't run them locally you can use deepseek for free through openrouter at least, it's way better than every website
How's the writing quality of the average model these days? Sure, they might be good for a quick fap but I honestly doubt they can produce an actually compelling story.
thanks anon! I'm learning so much!
Me
(As I enter the secret entrance to the honningbrew meadery, I find a lone guard on a chair staring away in thought. I realize that you have no allies nearby, and I lower my bow, stealthily sneaking behind you and belting your head from behind)
bot
Oof! (Rolls onto the ground unconscious)
(As I tie him up and gag him, I hear a soft whistle. Looking up, I see Weylan signaling me to come over)
Weylan: Well done, we were worried you wouldn't be able to get in here without raising an alarm. (Walks over to unlock the door) Now let's go find that recipe!
(As we enter the dimly lit room filled with massive vats and barrels of mead, I hear a faint humming sound emanating from somewhere within it)
Weylan: This place is even bigger than I imagined. (Takes out a lantern) Hold this for me will you? (Hands me the lantern) We need to find that recipe and get out of here before we're discovered.
(We search through the room, our footsteps echoing loudly on the stone floor. The humming sound grows louder as we approach a large wooden door at the end of the hallway)
Weylan: I think that must be where they keep the recipe. (Places his ear against the door) It sounds like someone's inside... let's go see what we can find out.
(We quietly approach the door, and I set the lantern down on a nearby table. Weylan carefully turns the handle and pushes it open slowly, revealing a small room with a single candle flickering in its center)
....cont.
NeMo-NeMo-12B is my go to right now, it's crazy how well it can randomly create new characters and dialogue
Weylan: Ah, here we are... (Starts searching through a desk filled with papers) The recipe should be somewhere in here...
(As he rummages through the papers, I glance around the room and notice several bottles of mead lined up on a shelf. Curious, I reach out to grab one when suddenly, the door slams shut behind us and we hear a loud click)
Weylan: (Spins around in alarm) What the?! We need to get out of here now! (Starts trying to open the door again)
(The humming sound grows louder, filling the room with an eerie vibration. Weylan continues to struggle with the door while I frantically search for a way out. Finally, I find a small window high up on the wall and start climbing up to it)
Weylan: (Calling up from below) Hurry! I think something's happening down here!
(As I push open the window and climb out onto the ledge outside, Weylan finally manages to unlock the door. He runs out just as the floor beneath him gives way, dropping him into a hidden pit filled with fermented honey)
Weylan: AHHH! (Falls into the sticky substance with a splash)
(I quickly climb down to help him up, but we are both stuck in the goo. We struggle to free ourselves as the humming sound grows even louder, filling our heads with an intense pressure. Suddenly, the door bursts open again and three guards rush into the room)
Guard 1: There they are! (Points at us) Get them!
(The guards charge towards us, swords drawn. We try to fight back but our movements are slowed by the honey. Just as we think all hope is lost, a familiar figure bursts through the door and starts fighting off the guards)
Ryker: Let's go! I'll hold them off while you guys escape!
(Grateful for his help, Weylan and I climb out of the pit and race back to the entrance, leaving Ryker to finish off the guards. As we reach the entrance, we hear a loud crash behind us and turn to see that Ryker has been overwhelmed by the guards. Determined not to let him die in vain, we decide to return and help him)
Weylan: We can't just leave him there! (Starts running back towards the meadery)
(As I follow Weylan, a guard rounds the corner and spots us. He raises his sword to attack when suddenly, a figure dressed in black appears behind him and slits his throat with a dagger. It's Adelaide!)
Adelaide: Quickly! (Pulls me into a dark alley) We need to get out of here before more guards come looking for us.
(We hurry down the alley, keeping our heads low as we make our way back to the inn where Ryker and I are staying. As we enter the room, we find Ryker lying unconscious on the bed)
Weylan: (Rushes over to check on him) He's alive... but he took quite a beating. We need to get him some help.
(Adelaide nods in agreement and starts rummaging through her bag, pulling out a small vial of liquid)
Adelaide: This should help. (Offers the vial to me) Give him a few drops of this. It'll help heal his wounds faster.
(I take the vial and carefully pour a few drops onto Ryker's lips, watching as he starts to stir awake)
Ryker: (Groggily) Where am I? What happened?
Weylan: You were overwhelmed by guards at the meadery. We came back to help you and Adelaide showed up just in time.
(Adelaide smiles faintly before disappearing out of the room, leaving us to tend to Ryker)
Ryker: (Sitting up slowly) That was stupid of me... I should have listened to you and stayed at the inn. But we did find out what they were hiding in that room, didn't we?
(I nod, handing him the vial with the rest of the liquid still inside)
why is deepseek so schizophrenic
R1 is a really schizo model, use V3 instead
also your temperature's probably too high
Weylan: Yes, we found their recipe for honningbrew. Now all we have to do is figure out how to replicate it and bring down Honningbrew once and for all.
(Ryker takes a deep breath, holding onto the vial tightly)
Ryker: We'll find a way. (Gets to his feet slowly, swaying unsteadily from the pain) But first, we need to get some rest and let our wounds heal properly.
(With that, we all lie down on the bed, exhausted from our ordeal. As I drift off to sleep, I can't help but wonder what other dangers lie ahead of us in this quest for justice.)
Yeah. It went way off into its own. This was echidna-13b-v0.3
It tends to keep it more on point, but I loved how it decided to give me an adventure. It's about 30% inclined to writing both sides of this, but normally a reroll brings it back down. In this case, the reroll was a shorter story, but it was still going into storytelling mode.
this will be the next step in AI personas in my books.
All someone needs to do is make a model that can run on almost any machine (phone/tablet/computer), decent code to make it run well, freedom to make your own AI personality, and a good text-to-speech model, and you're rich.
like legit, this will most likely kill vtubers off if it blows up. cause why listen to some random woman act like a retard when you can MAKE your own retarded vtuber
Sorry what are you telling me to do? With Nemo or Small?
I really regret deleting ST to try to stop gooning. Now I just goon to the memories of my bots in my head
github.com
github.com
this works with most models and probably exists in other backends too
meant to post
github.com
as second link
Right, I know about the wiki and looked it up myself, but I'm still confused.
My GGUF's are already quantized, so what exactly is this doing?
Is this supposed to show how good the AIs have gotten? Why waste your time reading stuff of that level when actually good novels exist?
That doesn't exist
just ask an llm
it basically quants the model's context too, allowing double context, although i wouldnt recommend doing it unless you really want more context
flash attention on the other hand is usually free lunch and will slash VRAM requirement of context a ton but will use some more ram
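Napkin math on why the cache quant matters. A sketch assuming Mistral Nemo-ish shapes (40 layers, 8 KV heads via GQA, head dim 128 — these are assumptions, check the model's config.json for real values):

```python
# Back-of-the-envelope KV-cache size: why a q8 cache roughly doubles usable context.

def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem):
    # 2x for the separate K and V tensors stored per layer
    total = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
    return total / 1024**3

f16 = kv_cache_gib(40, 8, 128, 16384, 2)  # fp16 cache: 2 bytes per element
q8 = kv_cache_gib(40, 8, 128, 16384, 1)   # q8 cache: ~1 byte per element

print(f"16k context, fp16 cache: {f16:.2f} GiB")
print(f"16k context, q8 cache:   {q8:.2f} GiB")
```

Same VRAM, twice the context at q8; that's the whole trade.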
You could run Gemma 3 27b or Mistral Small 3.1 (24b), although you would have to use a smaller quant, say Q3. I wouldn't recommend it unless you're okay with low context sizes.
so is there any way I can offload some of the workload to the ram if I run out of vram?
Yes, this is what gguf does. You offload a portion of the model to RAM. Works well until you go below an 80% or so GPU threshold. You'll technically be able to load bigger models, but the generation speeds will be really low. I'm talking like 1-2 t/s.
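A hypothetical launch sketch of that partial offload; the right layer count depends on the model and quant, so treat the numbers as placeholders:

```shell
# koboldcpp: keep 30 of the model's layers on the GPU, let the rest spill to RAM
python koboldcpp.py --model mymodel.gguf --gpulayers 30

# llama.cpp equivalent flag
llama-server -m mymodel.gguf -ngl 30
```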
Nah, its perfectly fine to still use Nemo, especially if your chats are longer context. To be honest its probably the best one for RP that's usable on low VRAM cards and I currently use it as my main model for that reason. Mistral Small 3.1 is much smarter, but in my opinion suffers from repetition issues despite proper sampler settings. Gemma 3 genuinely blows me away with everything it does, but has a larger VRAM requirement so I can only fit about 14k context despite having a 24gb card.
I recommend this model to anyone with a decent GPU. I run the Q6_K on my 3090 and it's the best local model I've experienced by far.
22b
it's the best local model I've experienced by far
many such copes
usually its also the biggest model people tried (can try) too
no clue why that retard decided to dump his absolute slop, but the prose quality of state-of-the-art models is about on par with your average author; sometimes Claude will surprise me with a nice little simile or juxtaposition or whatever. However, they are totally unable to craft a larger narrative structure and probably won't be for many, many years
with
--usecublas lowvram
option in kobold so that all context will be in ram while you run q4 of the model in vram
I think there's an untapped potential here.
Just imagine a game like Artificial Academy 2 but the characters are AI bots.
why stop there?
why not make a vtuber-like AI assistant?
have it talk to you like you're on discord, ask it stuff and get info fast?
why not have it do something like Neuro-sama and play games with you?
use ST to create stories together as she reads them out for you?
you're right, there is untapped potential, but you're thinking too small right now.
I keep hearing that the 5060ti is shit for vidya, but is it good for generating images of AI porn?
I got my 5060ti 16gb for $430 MSRP.
Coom. Also, I mostly use NovelAI right now. I am thinking of getting a 5090 so I can do some local gen AI that doesn't suck (I currently have a 10GB RTX 3080 which just isn't up to the task). I have wired up SillyTavern with NAI from time-to-time for a change of pace, but I'm not sure that I actually prefer chatbots over the story-format of NAI.
if you wanna do AI porn just go to civitai dot com
just make it show nsfw stuff and see all the porn videos people have made on it
don't even use porn site anymore because of this since it just has so much good shit
Civitai doesn't have the nsfw I want and I rather run it locally anyways.
Civitai is literally in the midst of doing a purge of a bunch of NSFW stuff. Local is always the way to go and more VRAM means more space to load models or even get into training your own loras which is something I have been doing for 3ish years at this point.
why is deepseek so schizophrenic sometimes, I just swiped and it suddenly hit me with
(actually, {{char}} is an ntr enjoyer, she regularly cheats on {{user}} and...)
what the fuck man, there's nothing like that on the defs or chat history, fuck you
Cheapest GPU for a decent AI setup?
even ai is cucking him
kek
Chatting up a female bot
She calls her own ass a boypussy
1500 bucks minimum
I haven't really kept up with local models in a while, is there a good local multimodal model that can analyze images or grab and translate text inside of images? That shit is the only thing I use ChatGPT for, everything else is local
download a card
start RP
need to fix typos in the intro because creator couldn't be assed to
open bots info to fix it
it's a complete ESL mess
I get second hand embarrassment when I see people doing that shit. English isn't even my second or third language, but god damn, can't these people use the LLM or service they make their bots for to correct their obvious mistakes? Or is it some form of esoteric knowledge to rub two braincells together and figure that out?
Gemma 3 12b and 27b have multimodality I think if you use kobold.
cheers anon
I use local and recently made an oni bot that calls me boya
And if you're wondering; jungle.
but is it good for generating images of AI porn?
ai art is very easy on pretty much any card, its not really resource intensive
Isn't VRAM a very important factor when it comes to AI?
Only for AI text. Voice and art are very easy on any card from the past 10 years. Training data for voice would probably be the second most intensive, but not even remotely as big of a deal compared to text.
I was wondering when the NovelAI ad was going to drop. Go back to /aids/, shameless shill.
I want to start an AI slop youtube channel, can someone give me advice? Please
lmao i get your russian currency is worthless but you patrol like 5 fucking boards looking for any mention of Nai
I hate that openrouter lowered the total free messages. At the same time, I'm writing a new preset every other day and doing that more than I actually RP; free deepseek has some weird quirks and it feels like you have to struggle with it a lot, changing settings or the entire preset like I have been doing.
I can't believe I came back to this stupid fucking hobby after about a year gone, I hate it more than I love it, but here I am again.
can't believe I came back to this stupid fucking hobby after about a year gone, I hate it more than I love it, but here I am again.
Its addicting cause we see where its going and thats what we want but for now we gotta fiddle
I mean, i dunno if it was much of an ad if im basically saying that im looking at local genning and new models because NAI is in a really sucky spot right now, with little hope of future improvement given how Erato turned out. That's like anti-shilling, anon.
This is just "video games cause violence" all over again. A chatbot isn't likely to make you suicidally depressed, but you're probably pretty likely to spend an unhealthy amount of time talking with one if you're already a depressed sack of shit. The cause and effect is backwards, the mental health issues precede the obsession.
forced sex with AI females
what black magic did you use to get a new card at MSRP?
What model is best if I want to go on an adventure collecting loli wives?
I figure most of them are probably censored
I figure most of them are probably censored
jailbreaks can get you past any model's censorship
Luckshitted on walmart.com at launch. Similarly got a 5070 from bestbuy.com too but I returned it since I failed to scalp it to make my setup cheaper.
Local is the best bet for more depraved stuff. This will anger the Russian poorfag, but Novel AI works too; it's weaker than a lot of others (and has poor memory) but completely uncensored and pretty customizable for characters. DeepSeek API and Grok API could work, but you need to constantly update your jailbreaks every few days. I did lots of brown tomboy loli with Grok until they countered my old jailbreak and I was too lazy to get a new one.
you got enough vram to make high resolution AI porn and AI videos
STOP DARK ROLEPLAYING
Any recommendations for AI videos with lora support? I want a pov of my favorite character doing cowgirl and bouncing on cock.
Is there any kind of guide on how to get it set up and counter things like hallucinations or forgetfulness? That's what always stopped me from interacting with AI models before.
still using a 1060 6GB in current year
have to just mooch off of whatever free service is available because i literally do not have enough VRAM to do fucking anything text-wise
STILL no new games to justify me upgrading to something better, can't justify upgrading purely for AI coom
Hallucinations and forgetfulness are an innate part of LLMs. You are never getting rid of them, though modern models hallucinate way less than in the AID2 days (they still forget shit). You can do cunny with literally any model nowadays, by the way. Even OAI, though they require the most jailbreaking. "uncensored" isn't even a positive worth talking about anymore.
Never listen to advice from NovelAI shills. Especially from one coming up with the "the models keep changing every day!" lie even though DeepSeek is an open model with multiple providers. They lie to your face to scam you out of money with a fucking Llama 3.0 with 8k context. You'll find most of them in Anon Babble hoping to catch clueless anons.
Silly Tavern has lorebooks and stuff but you have to do that stuff manually for it to be useful, kind of like keeping a diary of the interactions you've had
But even then that's not totally reliable either, AI just isn't there yet
8k context
a good beefy depraved card can be upwards of 2-3k context alone
oh nyo nyo nyooooooooooooooooooo~
AI images I can understand, but I don't get how you fags can get off to text.
try perfect pony XL(civitai.green
or WAI-NSFW-illustrious-SDXL (civitai.green
So am I missing something regarding local LLMs? Are they really better than Gemini? Because Gemini seems pretty good, and with a preset it allows cunny.
I mean, I don't have a GPU or anything so I don't have a choice, but it also seems like even local stuff has restrictions, which is surprising. So I'm just wondering if I'm missing anything with local.
used 3090 600-800$
newfag here, what do those numbers and letters stand for?
you can hook them up to voice and art
It's all really personal taste at this point; models all have their own writing that differs from each other. The biggest advantage of proprietary LLMs is just the speed; doing group chats with local is dogshit
the 12b stands for 12 billion parameters
27b is 27 billion parameters
it's the size of the model's learned weights; the bigger the parameter count, the more it knows (and the more memory it needs)
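Rule of thumb for what a parameter count costs you in memory: file size ≈ params × bits-per-weight ÷ 8. A sketch with approximate bpw figures for common GGUF quants (assumptions, not exact numbers):

```python
# Rough model-size estimate from parameter count and quant bits-per-weight.

def model_gb(params_b, bpw):
    # billions of params * bits per weight / 8 bits per byte = gigabytes
    return params_b * bpw / 8

for name, params in [("Nemo 12B", 12), ("Mistral Small 24B", 24), ("Gemma 27B", 27)]:
    q4 = model_gb(params, 4.8)  # ~Q4_K_M
    q8 = model_gb(params, 8.5)  # ~Q8_0
    print(f"{name}: ~{q4:.1f} GB at Q4, ~{q8:.1f} GB at Q8 (plus context on top)")
```

Which is roughly why 12GB cards top out around 12-13b at Q4 once you add context.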
No, local models aren't better than anything. Well, I guess deepseek is good and it is local in theory, but in practice it is too big.
Even from a financial perspective, paying for online models will be cheaper than buying thousands of dollars in hardware (do the math: it would take decades of paying for deepseek to come anywhere near the cost of a PC capable of running a 70b). The only cope left for local is "muh privacy."
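The math in question, with placeholder prices (both numbers are assumptions — plug in your own):

```python
# Break-even of cloud API spend vs a local rig. Prices are hypothetical.
gpu_rig_cost = 2000.0     # e.g. a used-3090 build, USD (assumption)
monthly_api_spend = 10.0  # heavy DeepSeek API usage, USD/month (assumption)

months_to_break_even = gpu_rig_cost / monthly_api_spend
print(f"~{months_to_break_even:.0f} months (~{months_to_break_even / 12:.1f} years)")
```

~200 months, so the "decades" figure holds unless you're burning way more than that on API calls.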
This shit is actually the true apple visualization test.
The fuck does that even mean?
Do these work with StableDiffusion? Not sure how to generate videos with them.
for me it's wyvern.
i cannot get an erection unless i see a convoluted plot, which might or might not include my own AI little girl who loves me and the day to day struggles we face together
Perchance also lets you ask the AI to edit html code.
Oh boy.
gotta use ComfyUI, the basic stablediffusion UI (Automatic 1111) doesn't have the feature yet
t. faggot that wastes his time stimulating tokiko in lost life instead of just hacking her happiness to max for guaranteed sex
Is there a 'for dumbfucks' guide for this?
Got it, thanks
what are you struggling with anon?
There is like a gazillion youtube videos that give you step by step
pls spoonfeed this poor coomer
whats your temp? turn it down
youtu.be
it wasn't even a good Tavern/Kobold bot
it was a lobotomised low effort char.ai bot that spits out like 3 sentences max as a response (and probably still required some amount of tard wrangling to reach that point)
I use them for both. I recently played a sort of DnD dungeon crawler with a fat-titted rogue companion I hired explicitly to venture into the dungeon AND to play with her tits (it was part of her contract)
Unfortunately, it seems 12B died a dog's death and there's been nothing new in the field for 8gig vramlets like me. I'm stuck with Rocinante and Violet Lotus, because the magical mystery Mag Mell everyone says is amazing performs like shit for me.
Step thinking? And define "big models"
Best model for a 12GB VRAM card?
I heard that 12GB can handle up to 13B
I started giving it a try the other day. I mean, I used it regularly but not for rp. This is the first time I'm really giving it a go
Kinda hard to set things up
I dunno. Does anyone have a character card format or something that's really good? What exactly is everyone using for good character cards?
ideally I could just spit chapters or summaries or scripts into it and have the AI make the card for me... if the format is specific enough i assume.
how do you rp in general? are you supposed to convey your actions through dialogue only
since the chatbot does the narration
or are you supposed to contribute to the narration. I don't know
I'm not too creative it seems
Local is only good for productivity. I want privacy when I'm doing serious work. I don't care about using online models for text smut because I don't feel it's tied to anything else I do online, and it's not something I was going to do with anything but an LLM.
Depends on what you want but I use Rocinante and Mag Mell
ST has some great extensions and can be customized to look nice.
I dunno. Does anyone have a character card format or something that's really good? What exactly is everyone using for good character cards?
ideally I could just spit chapters or summaries or scripts into it and have the AI make the card for me... if the format is specific enough i assume.
You could go to, say, chub for cards; more important is your chat completion preset, which helps the AI with structure. Go to the AI thread in Anon Babble and look at the jailbreaks.
how do you rp in general? are you supposed to convey your actions through dialogue only
The AI can easily pick up what you feed it; you should contribute in order to keep it from going schizo or off story. You can use *text goes here* for personal actions, "speaking goes here" for talking, or just do [OOC: AI instructions go here.] to force the AI to pay attention or do something.
Anything specific about either?
I only know text gen on a surface level.
I'm not too creative it seems
Just play around, the more you use it, the more you will figure it out. Most cards have a pre made scenario you could explore.
The only cope left for local is "muh privacy."
Yeah, ask that FFXIVfag how much of a cope privacy is. Not that I'd do anything that degenerate, but clearly there is some value there. Then again I think he was one of those retards who are basically pirating LLM usage.
BACK IN MY DAY WE HAD AI DUNGEON WITH CRASHING GOOGLE COLABS AND WE ENJOYED IT
You are a knight in the kingdom of Larion
privacy
cope
good goy
How did you get the character in your background nicely positioned like that? Is that straight txt2image?
qrd?
Whys that significant?
I have a 4070tisu, what's the best local model I can run? At a good generation speed.
XIVfag uses some LLM (which I think he didn't pay for)
Generates Lala porn
Party van arrives
"Yeah, uhh.... I play Final Fantasy XIV..."
Straight to jail
It's extremely funny
Wanna listen to some medieval sex folk while talking to your AI girlfriend?
That's the group expressions extension. You use it with the built-in Character Expressions extension, feed it one image with a transparent background (the catgirl, in the previous example) and then configure group expressions to position it however you like. If you are doing group play, it looks like pic related too.
They're mistral v7 text instruction models, just tweak the temperature numbers and other stuff until you're satisfied
Honestly the Google Colab doc is a good place to start. It's meant for a non-local setup (JanitorAI) but you can just apply what's said there to your LLM setup
If you end up not liking either of those models there are plenty of others to try it, it took me a few downloads before I settled on those two
colab.research.google.com
That's what he gets for living in some yuropoor nation (TL note: Australians are upside down Europeans)
What are your Mag Mell settings? No matter what I do, it seems mid
Was that an LLM or image gen? The recent lawsuits were over azure image gen. I didn't know there were older cases. I figure you would be more likely to get into real trouble for generating cunny on image as opposed to just text.
*cracks open a fresh Baja Blast*
Yep, it's gamer time. Gotta immerse myself in this fantasy.
I think he lives in Florida.
Image gen.
Cool! Thanks for the tip
I started with Pygmalion 6b on Google Colab
Honestly I was asking for setting recommendations earlier myself, I can usually get through a coom session before it starts to grate on me but after some recent chats I think I need to experiment more
It definitely has a problem of generating "The stage is set, what will happen next?" and "Sorry but I'm uncomfortable" but that's pretty easy to wrangle, especially if you swipe at least once; for some reason the message quality drastically improves on the second attempt, at least for me
slightly
Guess I'll just mess around with it more.
I happened to find a good preset, (along with gemini supposedly being far less restricted than other llms), this fact is really what prompted me to try. My tastes can be on the far end of the unsafe side of the safety spectrum
You can jailbreak any LLM, censorship is a joke.
My tastes can be on the far end of the unsafe side of the safety spectrum
It's AI, you can do whatever the fuck you want. My biggest suggestion is to be verbose in your own writing, because if you write basic-as-fuck single sentences, the AI will respond in kind. If you're detailed, then so is the AI.
The most important part of a card is the starting message, honestly. Make sure it conveys the type of character and story you want, and be 100% positive the starting message does not contain any text describing your actions.
Otherwise, write the card's description using normal language. Don't do w++ or any other retarded shit. Don't use example messages. I usually do like, 1 paragraph for appearance, one for backstory, and a bunch together for personality.
Not good, just interesting.
I write by trade, so I know (I hope) what good writing is. I was just fascinated that my one prompt, with no further backstory, yielded this result. The one with Lamia also did this with different models, and it was surprisingly fun.
I think he lives in Florida.
Then what was he arrested for?
The most important part of a card is the starting message, honestly.
Oh, I thought it was the opposite. I didn't put much effort into that for the couple I created and just focused on the description.
Generating Lala lewds, presumably in a realistic manner. It got flagged by whatever LLM he was using, and the admins contacted the cops.
The things the LLM pays the most attention to are the prefill, jailbreak, and recent chat history. If your starting message sucks, you'll fill history with further shit and your experience will suck. Get a nice starting message in; exactly the type of message you'd like to see from the LLM. The length you want, amount of description, dialogue, character speech style, etc. LLMs are copying machines so it'll keep going with what you did.
That's not what he was arrested for you lying retard
he got arrested because the retard was trying to upload and share things through microsoft onedrive, including real cp
That's kinda worrying. Well no one's ever gotten in trouble for text. I'm probably fine. And I'm not in Florida.
Has anyone managed to hook one of these up to game mechanics like character sheets successfully yet?
Got a link to a news story about his arrest? I couldn't find the bodycam footage of his arrest again.
There are a fair number of people getting in trouble for image gen, but it's all for realistic shit. That's why NovelAI locks its image gen down to generate animu and not realistic images. Like you though, I really only care about text gen. I think if they were going to go after text based degens they would probably start with total fanfiction genocide.
Thought this sounded familiar as youtube was pushing the shit out of the arrest footage a few months ago. It wasn't for generating lala or whatever the fuck. He was genning real life shit and then tried to trade it or whatever through some chat shit like discord, whatsapp, etc. They caught wind of it and alerted the authorities on account it looked like real shit. He's still in hot water over it but too soon to tell since lawmakers are just now trying to see how to handle the entire ai gen shit. Can't find the vid but his name is Phillip McCorkle and literally looks like any other anon. Surprise surprise.
That's not illegal in Florida, so I doubt it.
you don't have to do anything illegal to get arrested
Sillytavern fags, how much info do you write for your persona? In what format? Can I see?
Is it the guy they came and got at the movie theater, with his dad there? Actually, first they came to his house and took all his stuff, then they got him at his job at the movies, I think. I saw a video on that. He was sharing AI gen stuff on Kik. I don't remember if there was anything real involved; probably, but I don't remember. I think he was one of those idiots who was like "yeah, I've seen cp a couple times on Kik" or something, just kept running his mouth to the cops. But either way, they arrested him for the AI
Well they did say it was illegal and they arrested him for it.
lollycon's illegal
It has to be that cause I searched the ffxiv lala thing and found nothing regarding anything like that. Sounds like something someone schizoed up.
unironic spanish meme
name: Anon
Persona:
yep, it's chat botting time.
mine is like 4 lines, you dont really have to add much
Name:
Personality:
Appearance:
There done
Can I have a QRD on the guy that keeps seething over Novel AI? He's in literally every AI thread. Dall-E, Bing, Claude, OpenAI, AI Dungeon, everywhere seething about Novel AI.
99% of the time i have no persona. If I feel I need to specify something then I'll have 1 or 2 sentences in there (but if that is specific to the card, you can write it in the card instead desu). In my experience a long detailed persona just makes things worse, eg the character you're playing with knowing things about you that they shouldn't know in universe, or the narration insisting on deviating so it can bring up the traits you wrote down.
Wish I wasn't a habitual phone coomer so I could make extensive use of these, alas.
Self insert
{{user}} is male. {{user}} has an athletic body, black hair and blue eyes.
For actual characters just go with the most relevant bare essentials
User is a superhero with elastic powers and a blue costume and he has a horn on his head
Stuff that you think should come up, basically
If you think in terms of permanent tokens in bots and lower the size to like 200 tokens at most then that's basically a persona
[{{user}} is a 28-year-old Romanian cat girl.]
i dunno about those specific extensions, but shouldn't they work on sillytavern on a phone as well? sillytavern is a web application
Self insert
{{user}} has an athletic body
Oh, they would work, it'd just be behind the chat-pane. At that point it's just slightly too much effort for too little reward, especially since for the bots I like/make, I'd have to go ahead and gen the images myself.
If I wanted fictional characters to rib on my ribs I'd just fall asleep and dream
Besides it's a time saver and avoids the bots hallucinating an even worse body shape
Let me have this
Don't like that they all just do 'yes and,' I want it to throw me some curveballs sometimes, tell me that things which shouldn't work don't, etc.
anybody try Wayfarer-12B? i hear it's good for AI Dungeon styled stuff, but, i wouldn't know what kind of a prompt/"character" to give it since i've only ever used AI for simple masturbatory purposes.
Here's the video where he gets arrested.
youtube.com
At 22:00 he gets asked what games he plays. At 28:20 cop says "lolis are illegal".