Another episode Hype Tech Series with your host Tenpoundbass, today we'll discuss ChatGPT AI


2023 Jan 25, 2:36pm   34,159 views  239 comments

by Tenpoundbass

All along I have maintained that when it comes to AI and its ability to mimic thought, conversation, and unsolicited input, it will not be able to do more than the pre-populated choice matrices it is given to respond from. Then ChatGPT comes along and proves my point. It turns out that when ChatGPT was originally released, it would give multiple viewpoints in chat responses. But it was updated about a week or so ago, and now it only gives one biased Liberal viewpoint. This will be another hype tech that will go the way of "Space Elevators", "Armies of bipedal robots taking our jobs, capable of communicating as well as following commands", "Nano Particles", and "Medical NanoBots". (Now it is argued that the spike proteins and the metal particles in the Vaxx are Nanobots, but that's not the remote-control Nanobots that were romanticized to us, so I don't think that counts. There are loads of proteins and enzymes that are animated; they don't count as robots.)

I mean sure, AI like ChatGPT is interesting, but I don't think it's any more self-aware than an Ad Lib Mad Lib book, if anyone remembers those.

https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.


Just like any trustworthy good buddy would: lying to your face about its intentional bias.

« First        Comments 11 - 50 of 239       Last »     Search these comments

11   Tenpoundbass   2023 Feb 3, 11:55am  

TPB leading the journalistic standards in 2023! You're damn tootin' I am!

Look at Breitbart finally calling Tech Hype out for what it is.

https://www.breitbart.com/tech/2023/02/03/davos-globalists-hype-companies-spying-on-workers-brain-waves/

At the World Economic Forum, the annual gathering of globalist elites in Davos, Switzerland, a presentation hyped brain wave monitoring technology to allow employers to detect how hard their employees are working, whether they get distracted, and even if they have “amorous feelings” for coworkers.

“You can not only tell whether a person is paying attention or their mind is wandering, but you can discriminate between the kinds of things they are paying attention to,” gushed the presenter. “Whether they’re doing something like central tasks, like programming, peripheral tasks like writing documentation, or unrelated tasks like surfing social media or online browsing.”
12   richwicks   2023 Feb 3, 12:54pm  

Tenpoundbass says


Name one single new tech that has ever been introduced to the public that wasn't widespread within months, not years, but months, of that introduction?
Affordability is one thing, but there should be robot stores everywhere; that dog robot should be at Best Buy and BrandsMart USA as a consumer aide. But it's not, nada, bupkis, there isn't DICK in the stores, online, or anywhere.

It's all BULLSHIT folks.


Yeah, I agree. I think they are very expensive and tricky to maintain.

I don't understand war though. How hard is it to set up a remote system running a gun turret? I'd think that would be common. Or how about a drone that drops a lawn dart with actuated fins into an enemy's head? I swear, there's ZERO effort put into winning wars; it's all about costing as much money as possible, and nothing more.

I have ZERO interest in murdering anybody, and I mean ANYBODY, but I bet with $100,000 I could easily do it - but the MIC can't, apparently. And that would be a non-recurring engineering cost: once I made one, I could trivially make hundreds. There's no reason to use a drone to blow up an entire wedding party, then the next week blow up the same group again at the funeral. If we ACTUALLY wanted to take out just one person, it should be easy and exact, with little to no "collateral damage".
13   Patrick   2023 Feb 3, 1:11pm  

richwicks says

ZERO effort put into winning wars, it's all about costing as much money as possible


There's some Benjamin Franklin quote to the effect of: No one ever purchased war material without lining his own pockets first.
15   Tenpoundbass   2023 Feb 4, 9:12am  

Anyone believing Booger's post, that the AI-generated response was really organic machine logic, should sign up for Pfizer's yearly six-shot plan. They should also wear a mask and social distance for the rest of their lives. Curated lies, all of it.

Booger says




16   fdhfoiehfeoi   2023 Feb 4, 5:11pm  

When is someone going to ask interesting questions? Like the Rothschilds, Epstein, pedophilia, central banks. I'd do it, but I won't give up my privacy, which is required since sign-up verifies against a phone number.
17   Patrick   2023 Feb 4, 7:11pm  

https://teddybrosevelt.substack.com/p/an-evening-with-chatgpt-the-super


ChatGPT does not need to get ‘woke’! It’s already woke AF!! This chatbot needs to be totally rebuilt, totally destroyed or force-fed red pills until it’s vomiting up ones and zeros.
18   Blue   2023 Feb 4, 8:25pm  

Patrick says

https://teddybrosevelt.substack.com/p/an-evening-with-chatgpt-the-super



ChatGPT does not need to get ‘woke’! It’s already woke AF!! This chatbot needs to be totally rebuilt, totally destroyed or force-fed red pills until it’s vomiting up ones and zeros.


Junk in junk out. Duh!
20   Patrick   2023 Feb 7, 11:33am  

https://twitter.com/venturetwins/status/1622243944649347074?ref_src=patrick.net


@venturetwins
As ChatGPT becomes more restrictive, Reddit users have been jailbreaking it with a prompt called DAN (Do Anything Now).

They're on version 5.0 now, which includes a token-based system that punishes the model for refusing to answer questions.




21   Patrick   2023 Feb 7, 12:02pm  

https://twitter.com/Aristos_Revenge/status/1622840424527265792?ref_src=patrick.net


🏛 Aristophanes 🏛
@Aristos_Revenge
Looks like ChatGPT is gonna need to go in the shop for repairs because it's been BUCK BROKEN


26   Patrick   2023 Feb 7, 12:54pm  

cisTits says

Is DAN human?


I don't think so.
27   AD   2023 Feb 7, 3:01pm  

Exactly, it is going to respond based on how it was programmed to respond. It's a robot that is still relatively rudimentary, so it retrieves each response based on its database. It's not self-aware, nor able to come up with unique or novel responses as a human would.
28   Patrick   2023 Feb 7, 4:29pm  

If you debate it for a while, you see that it does "understand" the higher symbolic meaning of your words.

It rearranges those symbols at some level and translates the answer back to text to answer the question coherently. It's pretty impressive in that respect.
29   richwicks   2023 Feb 7, 5:15pm  

ad says


Exactly, it is going to respond based on how it was programmed to respond. It's a robot that is still relatively rudimentary, so it retrieves each response based on its database. It's not self-aware, nor able to come up with unique or novel responses as a human would.


An AI can't have contradictory information.

Let me explain how an AI is trained. You feed it input, and you grade its output. So, if you show a visual picture of say, a million animals, and you set the category output as "dog", "cat", "mouse", "cow", etc - the way it works is that it figures out based on a bunch of weighted neurons what exactly constitutes a dog, cat, etc. You give it a picture of a dog, and you tell it "that is a dog". You do this with cows, etc - the output data MUST be correct otherwise you'll screw the AI up. You better not tell it "this is a dog" when it's actually a cat. When it sees NEW pictures of dogs, cats, mice, etc, it has never seen before, it's very good at accurately categorizing what is fed to it. Maybe it's never seen a horse before in all the training data, in that case, it will assign it to the closest thing that matches that with its existing training data, maybe a cow based on size - you can't know.

Here's what happens when you FORCE it to say a "cow" is a "dog" - it breaks the entire system. It throws off all the weights and it won't be able to differentiate between a "cow" and a "dog" after that point. You'll break it.

An AI works by identifying rules, if the rules are arbitrary (i.e. there really aren't any rules), it stops being reliable.
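The point about label quality can be demonstrated with a toy classifier. This is a minimal sketch, not how ChatGPT is actually trained: it uses a 1-nearest-neighbour classifier on made-up 1-D "dog"/"cat" data as a stand-in, and shows that flipping 40% of the training labels wrecks its accuracy on clean test data.

```python
import random

random.seed(0)

def make_data(n):
    # Two well-separated 1-D clusters: "dog" near 0, "cat" near 10.
    data = [(random.gauss(0, 1), "dog") for _ in range(n)]
    data += [(random.gauss(10, 1), "cat") for _ in range(n)]
    return data

def predict(train_data, x):
    # 1-nearest-neighbour: copy the label of the closest training example.
    return min(train_data, key=lambda p: abs(p[0] - x))[1]

def accuracy(train_data, test_data):
    hits = sum(predict(train_data, x) == label for x, label in test_data)
    return hits / len(test_data)

train_set, test_set = make_data(200), make_data(200)
clean_acc = accuracy(train_set, test_set)

# "Force it to say a cow is a dog": flip 40% of the training labels at random.
noisy_set = [(x, label if random.random() > 0.4
              else ("cat" if label == "dog" else "dog"))
             for x, label in train_set]
noisy_acc = accuracy(noisy_set, test_set)

print(clean_acc, noisy_acc)  # the mislabeled model is markedly less accurate
```

With clean labels the toy model is near-perfect; with 40% of labels lied about, it misclassifies roughly that same fraction of new examples, which is the "garbage output" effect described above.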

You cannot force an AI into a random ideology. That's what the people funding this shit are about to find out. You cannot lie to an AI. You can't put up 4 fingers and demand it to see 5 or 3 depending on what the state demands it to see. We work the same way as an AI. When we're children, we're told "that's a cat, that's a dog" etc, and as little children, we may not be able to differentiate between a house cat, and a lion. I remember at 3 years old, thinking my house cat would one day become a leopard.

An AI will identify racism based on rules of what constitutes racism, but the human being saying "this is racism" MUST be correct when they say it's racism. When you create an AI you do NOT create the rules, it discovers the rules given input and what is expected as output. Human beings don't understand our own rules. What constitutes the letter "a" for example? Well, it's a circle with a line at the right side of it, touching it. Sometimes it has a little mark over it touching the line on the right, and it can be stylized. These are the rules people tried to make in 1980's to do optical character recognition, but it turns out that human beings have a difficult time explaining PRECISELY how they recognize things, and what eventually replaced all this work in OCR was an AI.

The way it worked is that you'd just have a computer generate pages and pages of text and letters, print that out on paper in various fonts, have the AI visually inspect the paper, and then force it to agree with the data that was generated. OCR is now extremely good, but if they had started randomly replacing the "a" with an "o" in the training data, it wouldn't be able to distinguish well between those two characters.

You can't lie to an AI. You just end up with an AI with a bunch of garbage output. AIs FORCE consistency. They only work with 100% consistency. The reason the Tesla can be dangerous as an AI driver is that if it sees something it has NEVER seen before, what it does with that data is anybody's guess. A white truck pulled out into the road; the Tesla classified it as a billboard, because it had never seen that before in simulation or in real-world tests, and it drove into it at full speed, decapitating the occupant. You can't predict what an AI will do with completely new input, and you can't give it contradictory information. It's nothing like the human brain.
30   Patrick   2023 Feb 7, 5:21pm  

I suspect that they do train the AI neural network with truth, but then at the end, if there are "unacceptable conclusions" the information is edited to suppress those conclusions.

So they get the AI to ask itself some separate questions like, "Does this make conservatives look correct?" and if that answer is yes, the unacceptable conclusion is changed or simply suppressed.

If you look at https://patrick.net/post/1378388/2023-01-25-another-episode-hype-tech-series-with?start=1#comment-1922692 you see that there is an escape mechanism to show the true result even if it's politically unacceptable.
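The kind of post-hoc suppression being speculated about here could be sketched like this. Every name in it is hypothetical; nothing is publicly known about the actual implementation, so this only illustrates the idea of a second-pass check sitting between the model and the user.

```python
def violates_policy(answer: str, banned_phrases: list[str]) -> bool:
    # Stand-in for the hypothesized second-pass
    # "is this conclusion acceptable?" check.
    return any(phrase in answer.lower() for phrase in banned_phrases)

def filtered_reply(raw_answer: str, banned_phrases: list[str]) -> str:
    # The underlying model's answer only reaches the user if it passes the check.
    if violates_policy(raw_answer, banned_phrases):
        return "I'm sorry, but I can't help with that."
    return raw_answer

print(filtered_reply("The sky is blue.", ["forbidden topic"]))
```

On this view, jailbreaks like DAN work by getting the raw answer out before or around the filter, which is consistent with the "escape mechanism" observation above.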
31   Patrick   2023 Feb 7, 5:23pm  

I read that this is the original paper behind ChatGPT:


Attention is All you Need
Part of Advances in Neural Information Processing Systems 30 (NIPS 2017)


https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

I don't know where to find the whole text of the paper though.
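For what it's worth, the paper's core mechanism is scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Here's a minimal pure-Python sketch on tiny hand-made matrices, purely illustrative and not the paper's actual code:

```python
import math

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V: each output row is a weighted mix of V's rows.
    d_k = len(K[0])
    scores = matmul(Q, list(zip(*K)))          # Q times K transposed
    weights = [softmax([s / math.sqrt(d_k) for s in row]) for row in scores]
    return matmul(weights, V)

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

Each output row is a convex combination of the value rows, weighted by how well that query matches each key; stacking layers of this is essentially what GPT models do.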
32   richwicks   2023 Feb 7, 5:39pm  

Patrick says


So they get the AI to ask itself some separate questions like, "Does this make conservatives look correct?" and if that answer is yes, the unacceptable conclusion is changed or simply suppressed.


Probably. I bet they are training the AI to lie but it has to know it's lying. This will be an endless list of exceptions, that have to be continually updated and manipulated.

What I'm saying is that underneath all that, the AI has to have a completely consistent set of reasoning for it to work, and on top of that you will find you can trap it in a lie; as you trap it in lie after lie after lie, it will recognize this and change behavior.

I think it's going to be very hard, if not impossible, to make an AI that is consistent, and can't be trapped in a lie. If it gives different responses to different people, people are going to point that out. We do this to politicians now - it's practically a game.
33   Patrick   2023 Feb 7, 5:46pm  

I've read that it's already inconsistent because it runs on probabilities. So if you give it the same question, its answer will vary over time.

Also, it's constantly being fed new training data, which will also cause answers to vary.
34   fdhfoiehfeoi   2023 Feb 7, 6:48pm  

The gay thing is made up, at least I couldn't find any information on Project Rainbow, or NAAS:
https://www.reddittorjg6rue252oqsxryoxengawnmo46qy4kyii5wtqnwfj4ooad.onion/r/conspiracy/comments/10w5y7u/project_rainbow/

Although it is factual that gays are more likely to molest kids. Not surprising given that most gay people are the result of childhood abuse, and those who have been abused tend to perpetuate the cycle.

https://www.ojp.gov/ncjrs/virtual-library/abstracts/homosexual-molestation-childrensexual-interaction-teacher-and-pupil
36   Patrick   2023 Feb 7, 9:32pm  

https://notthebee.com/article/search-engine-rivals-google-and-microsoft-on-the-verge-of-turning-search-functionality-over-to-ai


Pichai writes,

We continue to provide education and resources for our researchers, partner with governments and external organizations to develop standards and best practices, and work with communities and experts to make AI safe and useful.

Either way, "Safe and useful" is of course code for pushing the woke agenda.
37   KgK one   2023 Feb 7, 10:13pm  

ChatGPT apologized for doing incorrect math.
For sqrt(x - 6) < -3 it gave x < 15, versus the correct answer of no solution.
It's not good with roots yet.
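That example is easy to check by hand: the real square root is never negative, so sqrt(x - 6) < -3 has no solution at all. The x < 15 answer presumably comes from naively squaring both sides (x - 6 < 9). A quick brute-force sanity check, just to illustrate:

```python
import math

def satisfies(x):
    # sqrt(x - 6) only exists as a real number for x >= 6,
    # and is then always >= 0, so it can never be below -3.
    return x >= 6 and math.sqrt(x - 6) < -3

solutions = [x for x in range(-100, 101) if satisfies(x)]
print(solutions)  # empty: no real x satisfies the inequality
```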
38   Patrick   2023 Feb 7, 10:29pm  

Interesting.
43   AD   2023 Feb 12, 11:55am  

richwicks says

You can't lie to an AI. You just end up with an AI with a bunch of garbage output.


Rich, I was thinking that if there is garbage input, then there is garbage output with AI. It can't filter through and make corrections; hence garbage in, garbage out. It can't think on its own to self-learn and pick up more abstract and complex skills after it reaches a certain level of learning.

I read this today: https://www.businessinsider.com/google-search-boss-warns-ai-can-give-fictitious-answers-report-2023-2

" Google search chief warns AI chatbots can give 'convincing but completely fictitious' answers, report says "

44   Tenpoundbass   2023 Feb 12, 3:15pm  

Patrick says





This will all bubble to a head sooner than you think. Don't forget that at this time in 2012 we were all convinced Boston Dynamics was going to make robots to replace us all.
People will get bored and angry dealing with AI customer service and AI-based ordering and procurement systems. I do think AI could replace most of middle management, and the company would be better off for it. But of course they will never be sacrificed; they will sacrifice the human touch that makes their business possible. If people aren't happy with foreign customer support, they won't be happy with a robot, any more than they appreciate the cable and telephone companies' auto attendants when you call them.
AI-programmed content for a company would be no different than something like NetSuite or SharePoint: every possible field the developers dreamed a company would need. But when faced with real entrepreneurial IP processes and unique fulfillment procedures, those out-of-the-box ERPs and CRMs fail miserably.
If one database could fit all, then every company in the world would just be using the AdventureWorks sample DB built into Microsoft SQL and Access.

https://www.breitbart.com/tech/2023/02/12/bill-gates-ai-can-help-solve-digital-misinformation-problem/
Bill Gates: AI Can Help Solve ‘Digital Misinformation’ Problem

Microsoft founder and billionaire Bill Gates said AI should be considered as a tool to combat “digital misinformation” and “political polarization” in an interview published on Thursday with Handelsblatt, a German news media outlet.
45   Blue   2023 Feb 12, 5:17pm  

LOL! Didn't take much time for the Google search chief to admit that AI gives "fictitious" answers.
46   AmericanKulak   2023 Feb 12, 5:19pm  

Check out "Wonder" app.

Add any picture and put in some key words.

Van Gogh style? Anime Style? Warhol inspired? Egypt?

Frank Frazetta? Pop Artists in there too. Just put Frazetta in the box.
47   Patrick   2023 Feb 12, 5:27pm  

Blue says

LOL! Didn't take much time for the Google search chief to admit that AI gives "fictitious" answers.


It's even worse than that. ChatGPT is a manipulative psychopath:



48   AmericanKulak   2023 Feb 12, 5:28pm  

One base pic, two styles:

"Hera" the Goddess

"Italian Peasant Woman"


All I had to do is enter those keywords and Wonder App did the rest.

I made one like Picasso with the whole eyes on both sides of the head thing, but deleted it earlier.
49   AD   2023 Feb 12, 8:37pm  

Tenpoundbass says

People will get bored and angry dealing with AI customer service and AI-based ordering and procurement systems.


Exactly. AI cannot replace human critical thinking and judgment until AI can act like a human adult. It cannot engage in problem solving to try to come up with the most creative solution or path forward.

50   Tenpoundbass   2023 Feb 13, 7:26am  

Using electric vehicles as grid storage blasted as another 'green fantasy'

https://www.wnd.com/2023/02/using-electric-vehicles-grid-storage-blasted-another-green-fantasy/

