
Another episode Hype Tech Series with your host Tenpoundbass, today we'll discuss ChatGPT AI


               
2023 Jan 25, 2:36pm   44,631 views  318 comments

by Tenpoundbass

All along I have maintained that when it comes to AI and its ability to mimic thought, conversation, and unsolicited input, it will not be able to do more than the pre-populated choice matrices it is given to respond from. Then ChatGPT comes along and proves my point. It turns out that when ChatGPT was originally released, it would give multiple viewpoints in chat responses. But it was updated about a week or so ago, and now it only gives one biased Liberal viewpoint. This will be another hype tech that goes the way of "Space Elevators," "Armies of bipedal robots taking our jobs, capable of communicating as well as following commands," "Nano Particles," and "Medical NanoBots." (Now it is argued that the spike proteins and the metal particles in the Vaxx are nanobots, but those aren't the remote-control nanobots that were romanticized to us, so I don't think that counts. There are loads of proteins and enzymes that are animated; they don't count as robots.)

I mean, sure, ChatGPT is interesting, but I don't think it's any more self-aware than an ad-lib Mad Libs book, if anyone remembers those.

https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.


Just like any trustworthy good buddy would do: lie to your face about its intentional bias.

« First        Comments 314 - 318 of 318        Search these comments

314   FortWayneHatesRealtors   2025 Nov 13, 1:29pm  

TPB what’s the deal with all AI being so energy spendy? I’m concerned that this will severely raise energy costs nationwide for all of us just so few fellas in big tech can talk to a website.

Can’t they make it energy efficient?
315   MolotovCocktail   2025 Nov 13, 10:19pm  

FortWayneHatesRealtors says


TPB what’s the deal with all AI being so energy spendy? I’m concerned that this will severely raise energy costs nationwide for all of us just so few fellas in big tech can talk to a website.

Can’t they make it energy efficient?


There's a new memristor that actually mimics a neuron cell in operation. It's in the lab phase, and it promises to cut energy costs down to single-digit percentages of what they are now.

https://spj.science.org/doi/10.34133/research.0758

But yeah, the current GPUs were designed for game consoles, and they generate a lot of heat, which requires cooling.

But I wouldn't worry. The best options for dispatchable power for AI compute centers are natgas plants and hydroelectric, and there are bottlenecks.

Nuclear takes too long to build even on Chinese schedules.

There is currently a three year backlog for natgas turbines with all three of the world's largest turbine manufacturers -- GE Vernova, Siemens Energy, and Mitsubishi Power -- combined.

Other, non-turbine means of electricity power generation with natgas fuel will be exploited, like solid oxide fuel cells. But those will take time, too.
316   Tenpoundbass   2025 Nov 14, 7:47am  

If they were SMART! Which they AREN'T!
They would be harnessing the heat from the GPUs to generate electricity.

Today's smart asses just want to do the upfront cool shit and don't give a fuck about how it gets there.
317   Patrick   2025 Nov 14, 7:53am  

It's an interesting problem, because computers are only a moderate-grade source of heat, and it's much easier to extract useful work from a high temperature differential.
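That point can be made quantitative with the Carnot limit, the thermodynamic ceiling on converting heat to work between two temperatures. A minimal sketch, using illustrative temperatures (roughly 60 C GPU coolant exhaust versus 25 C ambient; these are assumptions, not measured datacenter figures):

```python
# Rough Carnot-limit estimate of how much electricity could in principle
# be recovered from GPU waste heat. All temperatures are illustrative
# assumptions, not measured datacenter figures.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum fraction of heat convertible to work between a hot and a
    cold reservoir (inputs in Celsius, converted to Kelvin)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# GPU coolant exhaust (~60 C) rejected to ambient air (~25 C):
# a small temperature differential.
gpu_waste = carnot_efficiency(60.0, 25.0)

# A gas turbine's combustion stage (~1400 C vs. ~25 C), for comparison.
turbine = carnot_efficiency(1400.0, 25.0)

print(f"GPU waste heat Carnot limit: {gpu_waste:.1%}")   # ~10.5%
print(f"Gas turbine Carnot limit:    {turbine:.1%}")     # ~82.2%
```

Even before real-world losses (which cut well below the Carnot ceiling), only about a tenth of GPU waste heat is theoretically recoverable as electricity, which is why datacenter waste heat is more often reused directly for district heating than converted back to power.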
318   Patrick   2025 Dec 9, 11:20am  

https://ground.news/article/studies-llms-sway-political-opinions-more-than-one-way-messaging_284bb8


LLMs Sway Political Opinions More Than One-Way Messaging

On December 4, 2025, a pair of studies published in Nature and Science showed, through controlled chatbot experiments, that dialogues with large language models can shift people's political attitudes.

Model training and prompting made a crucial difference: chatbots trained on persuasive conversations and instructed to use facts reproduced partisan patterns and produced asymmetric inaccuracies, psychologist Thomas Costello noted.

Researchers found concrete effect sizes: U.S. participants shifted their ratings by two to four points, and Canadian and Polish participants by about 10 points, with 36%-42% of the effect still durable after a month.

The immediate implication is a trade-off between persuasiveness and accuracy. The study authors found that about 19% of chatbot claims were predominantly inaccurate, and that right-leaning bots made more false claims, warning that political campaigns may soon deploy such persuasive but less truthful surrogates.

Given the scope and institutions involved (tests with nearly 77,000 UK participants and 19 LLMs, run by the UK AI Security Institute, Oxford, LSE, MIT, Stanford, and Carnegie Mellon), experts now ask how to detect ideologically weighted models.


This is why AI is so incredibly woke, censored, and locked down.

