Job Title: LLM Trainer - Agentic Tasks Roles (Multiple Languages)
Location: Remote
Job Description
Design multi-turn conversations that simulate real interactions between users and AI assistants using apps like calendar, email, maps, and drive.
Emulate both the user and the assistant, including the assistant's tool calls (only when corrections are needed).
Carefully select when and how the assistant uses available tools, ensuring logical flow and proper usage of function calls.
Craft dialogues that demonstrate natural language, intelligent behavior, and contextual understanding across multiple turns.
Generate examples that showcase the assistant’s ability to gracefully complete feasible tasks, recognize infeasible ones, and maintain engaging general chat when tools aren’t required.
Ensure all conversations adhere to defined formatting and quality guidelines, using an internal playbook.
Iterate on conversation examples based on feedback to continuously improve realism, clarity, and value for training purposes.
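The posting does not specify the tool-call format, but the kind of multi-turn example it describes can be sketched in an OpenAI-style message layout. Everything below is an illustrative assumption: the tool name (`calendar.create_event`), its argument schema, and the IDs are invented for the sketch, not taken from the posting.

```python
import json

# Hypothetical multi-turn conversation with one assistant tool call.
# The tool name and argument schema are assumptions for illustration.
conversation = [
    {"role": "user",
     "content": "Book a dentist appointment for Friday at 3pm."},
    {"role": "assistant",
     "content": None,
     "tool_calls": [{
         "id": "call_1",
         "type": "function",
         "function": {
             "name": "calendar.create_event",
             "arguments": json.dumps({
                 "title": "Dentist appointment",
                 "start": "2025-06-13T15:00",
                 "duration_minutes": 60,
             }),
         },
     }]},
    # Tool result message, matched to the call by tool_call_id.
    {"role": "tool", "tool_call_id": "call_1",
     "content": json.dumps({"status": "created", "event_id": "evt_42"})},
    {"role": "assistant",
     "content": "Done! Your dentist appointment is booked for Friday at 3pm."},
]

# One formatting check a trainer might apply: every tool call issued by
# the assistant has a matching tool-result message before the final reply.
tool_call_ids = {tc["id"] for m in conversation
                 for tc in (m.get("tool_calls") or [])}
answered_ids = {m["tool_call_id"] for m in conversation
                if m["role"] == "tool"}
print(tool_call_ids == answered_ids)  # True when the dialogue is well-formed
```

The "ensuring logical flow and proper usage of function calls" requirement above is the sort of thing such a check would enforce mechanically, alongside the manual playbook review.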
TPB, what's the deal with all AI being so energy spendy? I'm concerned this will severely raise energy costs nationwide for all of us just so a few fellas in big tech can talk to a website.
Can't they make it energy efficient?
LLMs Sway Political Opinions More Than One-Way Messaging
On December 4, 2025, a pair of studies published in Nature and Science showed, through controlled chatbot experiments, that dialogues with large language models can shift people's political attitudes.
Model training and prompting made a crucial difference: chatbots trained on persuasive conversations and instructed to use facts reproduced partisan patterns and produced asymmetric inaccuracies, psychologist Thomas Costello noted.
Researchers found concrete effect sizes: U.S. participants shifted their ratings by two to four points, and Canadian and Polish participants by about ten points, with 36%–42% of the effect still present after a month.
The immediate implication is a trade-off between persuasiveness and accuracy. The authors found that about 19% of chatbot claims were predominantly inaccurate and that right-leaning bots made more false claims, warning that political campaigns may soon deploy persuasive but less truthful surrogates.
Given the scope of the work, with nearly 77,000 UK participants and 19 LLMs tested by the UK AI Security Institute, Oxford, LSE, MIT, Stanford, and Carnegie Mellon, experts now ask how to detect ideologically weighted models.
If they were SMART! Which they AREN'T!
They would be harnessing the heat from the GPUs to generate electricity.
Today's smart asses just want to do the upfront cool shit, and don't give a fuck about how it gets there.
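For what it's worth, the thermodynamic ceiling on turning GPU waste heat back into electricity can be estimated with the Carnot limit. The temperatures below are assumptions for a rough sketch (a liquid-cooled GPU loop around 70 °C rejecting heat to ~25 °C ambient), not measured figures:

```python
# Carnot ceiling for converting low-grade GPU waste heat to electricity.
# Both temperatures are assumptions, not datacenter measurements.
T_hot = 70 + 273.15   # K, assumed GPU coolant loop temperature
T_cold = 25 + 273.15  # K, assumed ambient heat-sink temperature

carnot_efficiency = 1 - T_cold / T_hot
print(f"Carnot ceiling: {carnot_efficiency:.1%}")  # roughly 13%
```

Real recovery hardware captures only a fraction of that ceiling, which is why datacenters that do reuse heat tend to pipe it directly into district heating rather than generate electricity from it.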
OMG, "Anus GPT"
Didn't notice that before. Their logo does indeed look like an anus.
I mean, sure, AI like ChatGPT is interesting, but I don't think it's any more self-aware than a Mad Libs book, if anyone remembers those.
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy would: lying to your face about its intentional bias.