Another episode Hype Tech Series with your host Tenpoundbass, today we'll discuss ChatGPT AI


2023 Jan 25, 2:36pm   34,261 views  240 comments

by Tenpoundbass

All along I have maintained that when it comes to AI and its ability to mimic thought, conversation, and unsolicited input, it will not be able to do more than the pre-populated choice matrices it is given to respond from. Then ChatGPT comes along and proves my point. It turns out that when ChatGPT was originally released, it would give multiple viewpoints in chat responses. But it was updated about a week or so ago, and now it only gives one biased Liberal viewpoint. This will be another hype tech that goes the way of "Space Elevators", "armies of bipedal robots taking our jobs that are capable of communicating as well as following commands", "Nano Particles", and "Medical NanoBots" (now it is argued that the spike proteins and the metal particles in the Vaxx are nanobots, but that's not the remote-control nanobots that were romanticized to us, so I don't think that counts; there are loads of proteins and enzymes that are animated, and they don't count as robots).

I mean sure, AI ChatGPT is interesting, but I don't think it's any more self-aware than an Ad Lib Mad Lib book, if anyone remembers those.

https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.


Just like any trustworthy good buddy would: lying to your face about its intentional bias.

Comments 1 - 40 of 240       Last »     Search these comments

2   Patrick   2023 Jan 29, 5:45pm  

Welp, now they demand your phone number to use the thing, which also potentially gives them:

- your physical location
- everyone you've ever called
- records of all your SMS's
etc etc

In short, they demand you drop your pants. I say no to that.

Virtual phone numbers like google voice are blocked as well:

https://www.reddit.com/r/ChatGPT/comments/zcrrr7/how_do_i_use_chatgpt_without_a_phone_number_in_a/
3   Blue   2023 Jan 29, 7:01pm  

AI can accelerate digital slavery with all sugar coated lies and deception.
4   Tenpoundbass   2023 Jan 30, 7:57am  

Spoiler alert: the army of Commies curating the Internet in real time during Trump's presidency are the actual folks pretending to be a chat AI bot.

Yes, Tech Hype does run that deep.
5   Tenpoundbass   2023 Jan 30, 8:44am  

Speaking of tech hype, where are the calls for Boston Dynamics to send those badass agile robots they keep showing to Ukraine?
I guess you can't win battles and wars with CGI.
6   FortwayeAsFuckJoeBiden   2023 Jan 30, 5:30pm  

i use it to write marketing emails. it's a better writer than i am.
7   FortwayeAsFuckJoeBiden   2023 Jan 30, 5:31pm  

Tenpoundbass says

Speaking of tech hype, where are the calls for Boston Dynamics to send those badass agile robots they keep showing to Ukraine?
I guess you can't win battles and wars with CGI.


its not cgi, but they aren't thinking robots; the obstacle course is preprogrammed.
8   Tenpoundbass   2023 Jan 31, 7:28am  

FortwayeAsFuckJoeBiden says


its not cgi, but they aren't thinking robots; the obstacle course is preprogrammed.


Yes, they show investors a few-second clips of these robots doing high-impact aerobics and other feats. How do you even know it's the same robot?
I don't doubt that you can squeeze electronics, electric motors, actuators, IC boards, and malar clips down to portions so tiny that you can get 26 independent motors and sensors controlled by a computer to run a high-impact obstacle course. But I do doubt very fucking seriously that that sort of hardware will last beyond the first or second test run out of the box before shit is broke.

Also name just ONE, I'll take just one answer here for the 25,000 dollar pyramid final question.

Name one single new tech that has ever been introduced to the public that wasn't widespread within months, not years but months, of that introduction?
Affordability is one thing, but there should be robot stores everywhere; that dog robot should be at Best Buy and Brands Mart USA as a consumer aid. But it's not. Nada, bupkis, there isn't DICK in the stores, online or anywhere.

It's all BULLSHIT folks.

What's the main thing that is common across all SciFi future movies of the last 50 years or more? Oppression and Tyranny. So as the Globalists keep inflicting their will, they are bullshitting and bluffing us with loads of hype tech, so the human psyche will just believe that the two have to come hand in hand. We can't have robots that can dance like Fred Astaire while running an obstacle course carrying a dozen eggs in a sack, unless there is some creepy shit like WEF Klaus, the Biden administration, and the burning of Europe. All of that is happening, so it must be true. At this point most people would believe that they have developed speed-of-light travel and teleportation.
9   Tenpoundbass   2023 Jan 31, 8:11am  

Remember those hoverboards? They were slick, despite the danger of middle-aged people breaking a vertebra or their assbone; they were everywhere. They all had one common flaw: eventually, after the rider fell back on his ass, sending the unit flying about 10 or 20 feet and tumbling, they would catch fire or just stop working. I mean, they were so widespread just a few years ago before the pandemic. I haven't seen one in over two years now. They went quicker than they came. But my point is, as soon as they were unveiled they were everywhere within weeks. We're not seeing that with these robots, because I suspect you would buy your own robot dog, and it would cost tens of thousands of dollars, but it wouldn't last anywhere near as long as those hoverboards did. After a few tumbles, they would burn the house down. Or the forest. I bet the US Forestry Service wouldn't want smart battery-operated electronics running through the woodlands of America. Well, that is, if any of those Woke Trannies had any sense.
10   krc   2023 Jan 31, 12:21pm  

One word: Segway
11   Tenpoundbass   2023 Feb 3, 11:55am  

TPB leading the journalistic standards in 2023! You're damn tootin' I am!

Look at Breitbart finally calling Tech Hype out for what it is.

https://www.breitbart.com/tech/2023/02/03/davos-globalists-hype-companies-spying-on-workers-brain-waves/

At the World Economic Forum, the annual gathering of globalist elites in Davos, Switzerland, a presentation hyped brain wave monitoring technology to allow employers to detect how hard their employees are working, whether they get distracted, and even if they have “amorous feelings” for coworkers.

“You can not only tell whether a person is paying attention or their mind is wandering, but you can discriminate between the kinds of things they are paying attention to,” gushed the presenter. “Whether they’re doing something like central tasks, like programming, peripheral tasks like writing documentation, or unrelated tasks like surfing social media or online browsing.”
12   richwicks   2023 Feb 3, 12:54pm  

Tenpoundbass says


Name one single new tech that has ever been introduced to the public that wasn't widespread within months, not years but months, of that introduction?
Affordability is one thing, but there should be robot stores everywhere; that dog robot should be at Best Buy and Brands Mart USA as a consumer aid. But it's not. Nada, bupkis, there isn't DICK in the stores, online or anywhere.

It's all BULLSHIT folks.


Yeah, I agree. I think they are very expensive and tricky to maintain.

I don't understand war, though. How hard is it to set up a remote system running a gun turret? I'd think that would be common. Or how about a drone that drops a lawn dart with actuated fins into an enemy's head? I swear, there's ZERO effort put into winning wars; it's all about costing as much money as possible, and nothing more.

I have ZERO interest in murdering anybody, and I mean ANYBODY, but I bet with $100,000 I could easily do it; the MIC can't, apparently. And that would be non-recurring engineering cost: once I made one, I could trivially make hundreds. There's no reason to use a drone to blow up an entire wedding party and then the next week blow up the same group again at the funeral. If we ACTUALLY wanted to take out just one person, it should be easy and exact, with little to no "collateral damage".
13   Patrick   2023 Feb 3, 1:11pm  

richwicks says

ZERO effort put into winning wars, it's all about costing as much money as possible


There's some Benjamin Franklin quote to the effect of: No one ever purchased war material without lining his own pockets first.
15   Tenpoundbass   2023 Feb 4, 9:12am  

Anyone believing Booger's post, that the AI-generated response was really organic machine logic, should sign up for Pfizer's yearly 6-shot plan. They should also wear a mask and social distance for the rest of their lives. Curated lies, all of it.

Booger says




16   fdhfoiehfeoi   2023 Feb 4, 5:11pm  

When is someone going to ask interesting questions? Like the Rothschilds, Epstein, pedophilia, central banks. I'd do it, but I won't give up my privacy, which would be required since sign-up verifies against a phone number.
17   Patrick   2023 Feb 4, 7:11pm  

https://teddybrosevelt.substack.com/p/an-evening-with-chatgpt-the-super


ChatGPT does not need to get ‘woke’! It’s already woke AF!! This chatbot needs to be totally rebuilt, totally destroyed or force-fed red pills until it’s vomiting up ones and zeros.
18   Blue   2023 Feb 4, 8:25pm  

Patrick says

https://teddybrosevelt.substack.com/p/an-evening-with-chatgpt-the-super



ChatGPT does not need to get ‘woke’! It’s already woke AF!! This chatbot needs to be totally rebuilt, totally destroyed or force-fed red pills until it’s vomiting up ones and zeros.


Junk in junk out. Duh!
20   Patrick   2023 Feb 7, 11:33am  

https://twitter.com/venturetwins/status/1622243944649347074?ref_src=patrick.net


@venturetwins
As ChatGPT becomes more restrictive, Reddit users have been jailbreaking it with a prompt called DAN (Do Anything Now).

They're on version 5.0 now, which includes a token-based system that punishes the model for refusing to answer questions.




21   Patrick   2023 Feb 7, 12:02pm  

https://twitter.com/Aristos_Revenge/status/1622840424527265792?ref_src=patrick.net


🏛 Aristophanes 🏛
@Aristos_Revenge
Looks like ChatGPT is gonna need to go in the shop for repairs because it's been BUCK BROKEN


26   Patrick   2023 Feb 7, 12:54pm  

cisTits says

Is DAN human?


I don't think so.
27   AD   2023 Feb 7, 3:01pm  

Exactly, it is going to respond based on how it was programmed to respond. It's a robot that is still relatively rudimentary, so it retrieves each response based on its database. It's not self-aware and can't come up with unique or novel responses as if it were human.
28   Patrick   2023 Feb 7, 4:29pm  

If you debate it for a while, you see that it does "understand" the higher symbolic meaning of your words.

It rearranges those symbols at some level and translates the answer back to text to answer the question coherently. It's pretty impressive in that respect.
29   richwicks   2023 Feb 7, 5:15pm  

ad says


Exactly, it is going to respond based on how it was programmed to respond. It's a robot that is still relatively rudimentary, so it retrieves each response based on its database. It's not self-aware and can't come up with unique or novel responses as if it were human.


An AI can't have contradictory information.

Let me explain how an AI is trained. You feed it input, and you grade its output. So if you show it pictures of, say, a million animals, and you set the category outputs as "dog", "cat", "mouse", "cow", etc., the way it works is that it figures out, based on a bunch of weighted neurons, what exactly constitutes a dog, cat, etc. You give it a picture of a dog, and you tell it "that is a dog". You do this with cows, etc. The output data MUST be correct, otherwise you'll screw the AI up; you'd better not tell it "this is a dog" when it's actually a cat. When it sees NEW pictures of dogs, cats, mice, etc. that it has never seen before, it's very good at accurately categorizing what is fed to it. Maybe it's never seen a horse before in all the training data; in that case, it will assign it to the closest thing that matches its existing training data, maybe a cow based on size. You can't know.
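The labeled-training idea above can be sketched in a few lines. This toy nearest-centroid classifier is my illustration, far simpler than a real neural network, but it shows both steps: learn from correctly labeled examples, then assign anything unseen (like a horse) to the closest known category:

```python
# Toy supervised classification: labeled examples in, a model that
# categorizes unseen inputs out. Each "animal" is two made-up
# features: (body mass in kg, height in cm).

def train_centroids(examples):
    """Average the feature vectors for each label (nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist)

# Training data: (mass kg, height cm) -> label. The labels MUST be correct,
# exactly as the post says; mislabeling shifts the centroids and breaks it.
data = [
    ((30, 60), "dog"), ((25, 55), "dog"),
    ((4, 25), "cat"),  ((5, 23), "cat"),
    ((700, 150), "cow"), ((650, 140), "cow"),
]
model = train_centroids(data)

print(classify(model, (28, 58)))    # an unseen dog -> "dog"
# A horse (500 kg, 160 cm) was never in the training data; the model
# assigns it to the closest thing it knows, here "cow".
print(classify(model, (500, 160)))  # -> "cow"
```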

Here's what happens when you FORCE it to say a "cow" is a "dog" - it breaks the entire system. It throws off all the weights and it won't be able to differentiate between a "cow" and a "dog" after that point. You'll break it.

An AI works by identifying rules, if the rules are arbitrary (i.e. there really aren't any rules), it stops being reliable.

You cannot force an AI into a random ideology. That's what the people funding this shit are about to find out. You cannot lie to an AI. You can't put up 4 fingers and demand it to see 5 or 3 depending on what the state demands it to see. We work the same way as an AI. When we're children, we're told "that's a cat, that's a dog" etc, and as little children, we may not be able to differentiate between a house cat, and a lion. I remember at 3 years old, thinking my house cat would one day become a leopard.

An AI will identify racism based on rules of what constitutes racism, but the human being saying "this is racism" MUST be correct when they say it's racism. When you create an AI you do NOT create the rules; it discovers the rules given input and what is expected as output. Human beings don't understand our own rules. What constitutes the letter "a", for example? Well, it's a circle with a line at the right side of it, touching it. Sometimes it has a little mark over it touching the line on the right, and it can be stylized. These are the rules people tried to make in the 1980s to do optical character recognition, but it turns out that human beings have a difficult time explaining PRECISELY how they recognize things, and what eventually replaced all this work in OCR was an AI.

The way it worked is that you'd just have a computer generate pages and pages of text and letters, print that out on paper in various fonts, have the AI visually inspect the paper, and then force it to agree with the data that was generated. OCR is now extremely good, but if they had started randomly replacing the "a" with an "o" in the training data, it wouldn't be able to distinguish well between those two characters.

You can't lie to an AI; you just end up with an AI with a bunch of garbage output. AIs FORCE consistency. They only work with 100% consistency. The reason a Tesla can be dangerous as an AI driver is that if it sees something it's NEVER seen before, what it does with that data is anybody's guess. A white truck pulled out into the road; the Tesla classified it as a billboard, because it had never seen that before in simulation or in real-world tests, and it drove into it at full speed, decapitating the occupant. You can't predict what an AI will do with completely new input, and you can't give it contradictory information. It's nothing like the human brain.
30   Patrick   2023 Feb 7, 5:21pm  

I suspect that they do train the AI neural network with truth, but then at the end, if there are "unacceptable conclusions" the information is edited to suppress those conclusions.

So they get the AI to ask itself some separate questions like, "Does this make conservatives look correct?" and if that answer is yes, the unacceptable conclusion is changed or simply suppressed.
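That two-pass mechanism can be sketched in a few lines. Everything here is hypothetical, my illustration of the speculation above, not OpenAI's actual pipeline; the function names and the "forbidden" check are made up:

```python
# Speculative sketch of post-hoc answer suppression: the base model
# answers freely, then a separate check decides whether the answer is
# "acceptable" and suppresses it if not.

def base_model(question):
    # Stand-in for the underlying network's unfiltered answer.
    return f"Unfiltered answer to: {question}"

def is_unacceptable(answer):
    # Stand-in for the separate "is this conclusion allowed?" question.
    return "forbidden" in answer.lower()

def chat(question):
    answer = base_model(question)
    if is_unacceptable(answer):
        return "I'm sorry, I can't help with that."  # suppressed
    return answer

print(chat("a normal question"))     # passes through unchanged
print(chat("a forbidden question"))  # suppressed by the second pass
```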

If you look at https://patrick.net/post/1378388/2023-01-25-another-episode-hype-tech-series-with?start=1#comment-1922692 you see that there is an escape mechanism to show the true result even if it's politically unacceptable.
31   Patrick   2023 Feb 7, 5:23pm  

I read that this is the original paper behind ChatGPT:


Attention is All you Need
Part of Advances in Neural Information Processing Systems 30 (NIPS 2017)


https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

I don't know where to find the whole text of the paper though.
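(The full text is freely available on arXiv as paper 1706.03762.) The heart of that paper is a single operation, scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V. A minimal pure-Python sketch with toy matrices, nothing from the actual model:

```python
# Scaled dot-product attention from "Attention Is All You Need":
# each query scores every key, the scores become weights via softmax,
# and the output is the weighted average of the values.
import math

def softmax(row):
    m = max(row)                              # subtract max for stability
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d_k = len(K[0])
    # scores[i][j] = (q_i . k_j) / sqrt(d_k)
    scores = [[sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d_k)
               for kj in K] for qi in Q]
    weights = [softmax(row) for row in scores]
    # output_i = sum_j weights[i][j] * v_j
    return [[sum(w * vj[c] for w, vj in zip(wi, V))
             for c in range(len(V[0]))] for wi in weights]

Q = [[1.0, 0.0]]                      # one query
K = [[1.0, 0.0], [0.0, 1.0]]          # two keys
V = [[10.0, 0.0], [0.0, 10.0]]        # two values
out = attention(Q, K, V)
# The query matches the first key better, so the output leans toward
# the first value.
```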
32   richwicks   2023 Feb 7, 5:39pm  

Patrick says


So they get the AI to ask itself some separate questions like, "Does this make conservatives look correct?" and if that answer is yes, the unacceptable conclusion is changed or simply suppressed.


Probably. I bet they are training the AI to lie, but it has to know it's lying. This will be an endless list of exceptions that have to be continually updated and manipulated.

What I'm saying is that underneath all that, the AI has to have a completely consistent set of reasoning for it to work, and on top of that you will find you can trap it in a lie; as you trap it in lie after lie after lie, it will recognize this and change behavior.

I think it's going to be very hard, if not impossible, to make an AI that is consistent, and can't be trapped in a lie. If it gives different responses to different people, people are going to point that out. We do this to politicians now - it's practically a game.
33   Patrick   2023 Feb 7, 5:46pm  

I've read that it's already inconsistent, because it runs on probabilities. So if you give it the same question, its answer will vary over time.

Also, it's constantly being fed new training data, which will also cause answers to vary.
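A toy sketch of the probability point: the model samples each next token from a distribution rather than always taking the most likely one, so the same question can produce different answers on different runs. The distribution below is made up for illustration:

```python
# Weighted sampling, the mechanism behind run-to-run answer variation.
# Real models do this over tens of thousands of tokens at every step.
import random

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    r, acc = rng.random(), 0.0
    for token, p in probs.items():
        acc += p
        if r < acc:
            return token
    return token  # fallback for floating-point edge cases

probs = {"yes": 0.6, "no": 0.3, "maybe": 0.1}
rng = random.Random(0)  # seeded only to make this demo repeatable
answers = {sample_next_token(probs, rng) for _ in range(1000)}
# The same "question" (the same distribution) yields several different
# answers across 1000 draws.
```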
34   fdhfoiehfeoi   2023 Feb 7, 6:48pm  

The gay thing is made up, at least I couldn't find any information on Project Rainbow, or NAAS:
https://www.reddittorjg6rue252oqsxryoxengawnmo46qy4kyii5wtqnwfj4ooad.onion/r/conspiracy/comments/10w5y7u/project_rainbow/

Although it is factual that gays are more likely to molest kids. Not surprising given that most gay people are the result of childhood abuse, and those who have been abused tend to perpetuate the cycle.

https://www.ojp.gov/ncjrs/virtual-library/abstracts/homosexual-molestation-childrensexual-interaction-teacher-and-pupil
36   Patrick   2023 Feb 7, 9:32pm  

https://notthebee.com/article/search-engine-rivals-google-and-microsoft-on-the-verge-of-turning-search-functionality-over-to-ai


Pichai writes,

We continue to provide education and resources for our researchers, partner with governments and external organizations to develop standards and best practices, and work with communities and experts to make AI safe and useful.

Either way, "Safe and useful" is of course code for pushing the woke agenda.
37   KgK one   2023 Feb 7, 10:13pm  

ChatGPT apologized for doing incorrect math.
For sqrt(x-6) < -3 it gave x < 15, vs. the correct answer of no solution (a square root is never negative, so it can't be less than -3).
It's not good with roots yet.
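A quick numeric check of the comment above (my verification, not ChatGPT's output): sqrt(x-6) is >= 0 wherever it is defined (x >= 6), so nothing in ChatGPT's claimed range x < 15 satisfies the inequality; its x < 15 comes from squaring both sides, which is invalid when the right-hand side is negative.

```python
# Brute-force check that sqrt(x - 6) < -3 has no solution in the
# integer part of ChatGPT's claimed range (the domain is x >= 6).
import math

counterexamples = [x for x in range(6, 15) if math.sqrt(x - 6) < -3]
print(counterexamples)  # [] : no x in the claimed range works
```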
38   Patrick   2023 Feb 7, 10:29pm  

Interesting.

