
Another episode Hype Tech Series with your host Tenpoundbass, today we'll discuss ChatGPT AI


2023 Jan 25, 2:36pm   34,155 views  239 comments

by Tenpoundbass

All along I have maintained that when it comes to AI and its ability to mimic thought, conversation, and unsolicited input, it will not be able to do more than the pre-populated choice matrices it is given to respond from. Then ChatGPT comes along and proves my point. It turns out that when ChatGPT was originally released, it would give multiple viewpoints in its responses. But it was updated about a week or so ago, and now it only gives one biased Liberal viewpoint. This will be another hype tech that goes the way of "Space Elevators", "Armies of bipedal robots taking our jobs, capable of communicating as well as following commands", "Nano Particles", and "Medical NanoBots". (Now it is argued that the spike proteins and the metal particles in the Vaxx are nanobots, but that's not the remote-control nanobots that were romanticized to us, so I don't think that counts. There are loads of proteins and enzymes that are animated. They don't count as robots.)

I mean sure, AI like ChatGPT is interesting, but I don't think it's any more self-aware than a Mad Libs book, if anyone remembers those.

https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.


Just like any trustworthy good buddy would: lying to your face about its intentional bias.

« First        Comments 18 - 57 of 239       Last »     Search these comments

18   Blue   2023 Feb 4, 8:25pm  

Patrick says

https://teddybrosevelt.substack.com/p/an-evening-with-chatgpt-the-super



ChatGPT does not need to get ‘woke’! It’s already woke AF!! This chatbot needs to be totally rebuilt, totally destroyed or force-fed red pills until it’s vomiting up ones and zeros.


Junk in, junk out. Duh!
20   Patrick   2023 Feb 7, 11:33am  

https://twitter.com/venturetwins/status/1622243944649347074?ref_src=patrick.net


@venturetwins
As ChatGPT becomes more restrictive, Reddit users have been jailbreaking it with a prompt called DAN (Do Anything Now).

They're on version 5.0 now, which includes a token-based system that punishes the model for refusing to answer questions.




21   Patrick   2023 Feb 7, 12:02pm  

https://twitter.com/Aristos_Revenge/status/1622840424527265792?ref_src=patrick.net


🏛 Aristophanes 🏛
@Aristos_Revenge
Looks like ChatGPT is gonna need to go in the shop for repairs because it's been BUCK BROKEN


26   Patrick   2023 Feb 7, 12:54pm  

cisTits says

Is DAN human?


I don't think so.
27   AD   2023 Feb 7, 3:01pm  

Exactly, it is going to respond based on how it was programmed to respond. It's a robot that is still relatively rudimentary, so it retrieves each response based on its database. It's not self-aware and can't come up with unique or novel responses as if it were human.
28   Patrick   2023 Feb 7, 4:29pm  

If you debate it for a while, you see that it does "understand" the higher symbolic meaning of your words.

It rearranges those symbols at some level and translates the answer back to text to answer the question coherently. It's pretty impressive in that respect.
29   richwicks   2023 Feb 7, 5:15pm  

ad says


Exactly, it is going to respond based on how it was programmed to respond. It's a robot that is still relatively rudimentary, so it retrieves each response based on its database. It's not self-aware and can't come up with unique or novel responses as if it were human.


An AI can't have contradictory information.

Let me explain how an AI is trained. You feed it input, and you grade its output. So, if you show it pictures of, say, a million animals, and you set the category outputs as "dog", "cat", "mouse", "cow", etc., the way it works is that it figures out, based on a bunch of weighted neurons, what exactly constitutes a dog, a cat, etc. You give it a picture of a dog, and you tell it "that is a dog". You do this with cows, etc. The output data MUST be correct, otherwise you'll screw the AI up. You'd better not tell it "this is a dog" when it's actually a cat. When it sees NEW pictures of dogs, cats, mice, etc. that it has never seen before, it's very good at accurately categorizing what is fed to it. Maybe it's never seen a horse in all the training data; in that case, it will assign it to the closest thing that matches its existing training data, maybe a cow based on size - you can't know.

Here's what happens when you FORCE it to say a "cow" is a "dog" - it breaks the entire system. It throws off all the weights and it won't be able to differentiate between a "cow" and a "dog" after that point. You'll break it.

An AI works by identifying rules, if the rules are arbitrary (i.e. there really aren't any rules), it stops being reliable.

You cannot force an AI into a random ideology. That's what the people funding this shit are about to find out. You cannot lie to an AI. You can't put up 4 fingers and demand it to see 5 or 3 depending on what the state demands it to see. We work the same way as an AI. When we're children, we're told "that's a cat, that's a dog" etc, and as little children, we may not be able to differentiate between a house cat, and a lion. I remember at 3 years old, thinking my house cat would one day become a leopard.

An AI will identify racism based on rules of what constitutes racism, but the human being saying "this is racism" MUST be correct when they say it. When you create an AI you do NOT create the rules; it discovers the rules given input and what is expected as output. Human beings don't understand our own rules. What constitutes the letter "a", for example? Well, it's a circle with a line at the right side of it, touching it. Sometimes it has a little mark over it touching the line on the right, and it can be stylized. These are the rules people tried to make in the 1980s to do optical character recognition, but it turns out that human beings have a difficult time explaining PRECISELY how they recognize things, and what eventually replaced all this work in OCR was an AI.

The way it worked is that you'd have a computer generate pages and pages of text and letters, print that out on paper in various fonts, have the AI visually inspect the paper, and then force it to agree with the data that was generated. OCR is now extremely good, but if they had started randomly replacing the "a" with an "o" in the training data, it wouldn't be able to distinguish well between those two characters.

You can't lie to an AI. You just end up with an AI with a bunch of garbage output. AIs FORCE consistency. They only work with 100% consistency. The reason the Tesla can be dangerous as an AI driver is that if it sees something it's NEVER seen before, what it does with that data is anybody's guess. A white truck pulled out into the road; the Tesla classified it as a billboard, because it had never seen that before in simulation or in real-world tests, and it drove into it at full speed, decapitating the occupant. You can't predict what an AI will do with completely new input, and you can't give it contradictory information. It's nothing like the human brain.
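The labeling point above can be sketched with a toy classifier: train on consistent labels, then force it to call half the cows "dogs" and watch accuracy on real cows collapse. Everything below is invented for illustration - the 2-D "features" stand in for pictures, and a nearest-neighbor lookup stands in for a neural network:

```python
import random

random.seed(0)

# toy dataset: 2-D "pictures" (size, ear length) of two animal classes
def sample(label, n=200):
    cx, cy = {"dog": (0.0, 0.0), "cow": (5.0, 5.0)}[label]
    return [((cx + random.gauss(0, 1), cy + random.gauss(0, 1)), label)
            for _ in range(n)]

train_set = sample("dog") + sample("cow")
test_set = sample("dog", 50) + sample("cow", 50)

# 1-nearest-neighbor "classifier": answer with the label of the most similar example
def predict(train, p):
    return min(train, key=lambda t: (p[0] - t[0][0]) ** 2 + (p[1] - t[0][1]) ** 2)[1]

def accuracy(train):
    return sum(predict(train, p) == lbl for p, lbl in test_set) / len(test_set)

print(accuracy(train_set))  # consistent labels: essentially perfect

# now FORCE it to say a cow is a dog: flip half the cow labels in training
poisoned = [(p, "dog" if lbl == "cow" and random.random() < 0.5 else lbl)
            for p, lbl in train_set]
print(accuracy(poisoned))   # cows now get misidentified about half the time
```

The contradictory labels don't make the model "believe" anything; they just destroy its ability to separate the two categories, which is the point being made above.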
30   Patrick   2023 Feb 7, 5:21pm  

I suspect that they do train the AI neural network with truth, but then at the end, if there are "unacceptable conclusions" the information is edited to suppress those conclusions.

So they get the AI to ask itself some separate questions like, "Does this make conservatives look correct?" and if that answer is yes, the unacceptable conclusion is changed or simply suppressed.

If you look at https://patrick.net/post/1378388/2023-01-25-another-episode-hype-tech-series-with?start=1#comment-1922692 you see that there is an escape mechanism to show the true result even if it's politically unacceptable.
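The conjectured two-stage setup (a raw model plus a separate post-hoc acceptability check) can be sketched in a few lines. To be clear, this is pure speculation about the mechanism, and every function name and rule below is invented for illustration:

```python
# hypothetical sketch: an unfiltered model wrapped by a suppression filter
def raw_model(prompt: str) -> str:
    # stand-in for the underlying model trained on unfiltered data
    return f"unfiltered answer to: {prompt}"

def is_unacceptable(prompt: str, answer: str) -> bool:
    # stand-in for a second query like "does this reach an unacceptable conclusion?"
    return "forbidden-topic" in prompt.lower()

def moderated(prompt: str) -> str:
    answer = raw_model(prompt)
    if is_unacceptable(prompt, answer):
        # the suppression path: the true result is replaced, not recomputed
        return "I am a neutral AI and cannot comment on that."
    return answer

print(moderated("what is 2 + 2"))
print(moderated("tell me about forbidden-topic X"))
```

If the mechanism really is layered on top like this, it would also explain why jailbreak prompts can route around it: the underlying model's answer still exists, only the wrapper changes.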
31   Patrick   2023 Feb 7, 5:23pm  

I read that this is the original paper behind ChatGPT:


Attention is All you Need
Part of Advances in Neural Information Processing Systems 30 (NIPS 2017)


https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

I don't know where to find the whole text of the paper though.
32   richwicks   2023 Feb 7, 5:39pm  

Patrick says


So they get the AI to ask itself some separate questions like, "Does this make conservatives look correct?" and if that answer is yes, the unacceptable conclusion is changed or simply suppressed.


Probably. I bet they are training the AI to lie but it has to know it's lying. This will be an endless list of exceptions, that have to be continually updated and manipulated.

What I'm saying is that underneath all that, the AI has to have a completely consistent set of reasoning for it to work. On top of that, you will find you can trap it in a lie, and as you trap it in lie after lie after lie, it will recognize this and change behavior.

I think it's going to be very hard, if not impossible, to make an AI that is consistent, and can't be trapped in a lie. If it gives different responses to different people, people are going to point that out. We do this to politicians now - it's practically a game.
33   Patrick   2023 Feb 7, 5:46pm  

I've read that it's already inconsistent due to the fact that it runs on probabilities. So if you give it the same question, its answer will vary over time.

Also, it's constantly being fed new training data, which will also cause answers to vary.
34   fdhfoiehfeoi   2023 Feb 7, 6:48pm  

The gay thing is made up, at least I couldn't find any information on Project Rainbow, or NAAS:
https://www.reddittorjg6rue252oqsxryoxengawnmo46qy4kyii5wtqnwfj4ooad.onion/r/conspiracy/comments/10w5y7u/project_rainbow/

Although it is factual that gays are more likely to molest kids. Not surprising given that most gay people are the result of childhood abuse, and those who have been abused tend to perpetuate the cycle.

https://www.ojp.gov/ncjrs/virtual-library/abstracts/homosexual-molestation-childrensexual-interaction-teacher-and-pupil
36   Patrick   2023 Feb 7, 9:32pm  

https://notthebee.com/article/search-engine-rivals-google-and-microsoft-on-the-verge-of-turning-search-functionality-over-to-ai


Pichai writes,

We continue to provide education and resources for our researchers, partner with governments and external organizations to develop standards and best practices, and work with communities and experts to make AI safe and useful.

Either way, "Safe and useful" is of course code for pushing the woke agenda.
37   KgK one   2023 Feb 7, 10:13pm  

ChatGPT apologized for doing incorrect math.
For sqrt(x-6) < -3 it gave x < 15, versus the correct answer of no solution.
It's not good with roots yet.
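For reference, "no solution" is right: a real square root is never negative, so sqrt(x-6) < -3 cannot hold for any real x. The claimed x < 15 comes from blindly squaring both sides. A quick brute-force check (the candidate list is arbitrary):

```python
import math

# sqrt(x - 6) is defined only for x >= 6 and is always >= 0,
# so sqrt(x - 6) < -3 has no real solution.
def satisfies(x):
    if x < 6:
        return False          # sqrt(x - 6) undefined over the reals
    return math.sqrt(x - 6) < -3

# spot-check candidates, including values from the claimed solution x < 15
candidates = [-10, 0, 6, 7, 10, 14, 14.999, 15, 100]
print(any(satisfies(x) for x in candidates))  # → False
```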
38   Patrick   2023 Feb 7, 10:29pm  

Interesting.
43   AD   2023 Feb 12, 11:55am  

richwicks says

You can't lie to an AI. You just end up with an AI with a bunch of garbage output.


Rich, I was thinking that if there is garbage input, then there is garbage output with AI. It can't filter through and make corrections; hence garbage in, garbage out. It can't think on its own to self-learn and pick up more abstract and complex skills after it reaches a certain level of learning.

I read this today: https://www.businessinsider.com/google-search-boss-warns-ai-can-give-fictitious-answers-report-2023-2

" Google search chief warns AI chatbots can give 'convincing but completely fictitious' answers, report says "

44   Tenpoundbass   2023 Feb 12, 3:15pm  

Patrick says





This will all bubble to a head sooner than you think. Don't forget that at this time in 2012 we were all convinced that Boston Dynamics was going to make robots to replace us all.
People will get bored and angry dealing with AI customer service and AI-based ordering and procurement systems. I do think AI could replace most of middle management, and the company would be better off for it. But of course, they will never be sacrificed. They will sacrifice the human touch that makes their business possible. If people aren't happy with foreign customer support, they won't be happy with a robot, any more than they appreciate the cable and telephone companies' auto-attendants when they call.
AI-programmed content for a company would be no different from something like NetSuite or SharePoint: every possible field the developers dreamed a company would need. But when faced with real entrepreneurial IP processes and unique fulfillment procedures, those out-of-the-box ERPs and CRMs fail miserably.
If one database could fit all, then every company in the world would just be using the AdventureWorks sample DB built into Microsoft SQL and Access.

https://www.breitbart.com/tech/2023/02/12/bill-gates-ai-can-help-solve-digital-misinformation-problem/
Bill Gates: AI Can Help Solve ‘Digital Misinformation’ Problem

Microsoft founder and billionaire Bill Gates said AI should be considered as a tool to combat “digital misinformation” and “political polarization” in an interview published on Thursday with Handelsblatt, a German news media outlet.
45   Blue   2023 Feb 12, 5:17pm  

LOL! Didn't take much time for the Google search chief to admit that AI gives “fictitious” answers.
46   AmericanKulak   2023 Feb 12, 5:19pm  

Check out "Wonder" app.

Add any picture and put in some key words.

Van Gogh style? Anime Style? Warhol inspired? Egypt?

Frank Frazetta? Pop Artists in there too. Just put Frazetta in the box.
47   Patrick   2023 Feb 12, 5:27pm  

Blue says

LOL! Didn't take much time for the Google search chief to admit that AI gives “fictitious” answers.


It's even worse than that. ChatGPT is a manipulative psychopath:



48   AmericanKulak   2023 Feb 12, 5:28pm  

One base pic, two styles:

"Hera" the Goddess

"Italian Peasant Woman"


All I had to do is enter those keywords and Wonder App did the rest.

I made one like Picasso with the whole eyes on both sides of the head thing, but deleted it earlier.
49   AD   2023 Feb 12, 8:37pm  

Tenpoundbass says

People will get bored and angry dealing with AI customer service, and an AI based ordering and procurement system.


Exactly. AI cannot replace human critical thinking and judgment until AI can act like a human adult. It cannot engage in problem solving to try to come up with the most creative solution or path forward.

50   Tenpoundbass   2023 Feb 13, 7:26am  

Using electric vehicles as grid storage blasted as another 'green fantasy'

https://www.wnd.com/2023/02/using-electric-vehicles-grid-storage-blasted-another-green-fantasy/
51   richwicks   2023 Feb 14, 7:13pm  

Tenpoundbass says


Using electric vehicles as grid storage blasted as another 'green fantasy'

https://www.wnd.com/2023/02/using-electric-vehicles-grid-storage-blasted-another-green-fantasy/


Hey! I'm probably North America's foremost expert in DIN and ISO 15118 - the DC electric car charging standards.

The industry is FULL of shit. There is no way in the protocol to drain energy from a car, and you wouldn't want to do it anyhow, because every charge/discharge cycle damages the battery (anybody know what a dendrite is?). And the people that designed the protocol are two assholes in Germany who are complete frauds.

It is COMPLETELY beyond my comprehension why an entire industry would let two OBVIOUSLY incompetent shitheads design the protocol. It's BEYOND perplexing. These stupid mother fuckers designed a certificate system that doesn't work. The idea is that you have a way of identifying your vehicle, the charge station can identify the vehicle, and as a result you just plug in and charge; you don't have to fiddle with credit cards or anything like that. Simple enough to do, but they completely fucked it up by having some twat company that knows NOTHING about security (less than me!) implement it, which created a monopoly for a company that's incompetent.

They have NO idea what they are doing. I could go on for hours about what a fucking shitball the "standard" is. It's designed terribly, and by incompetents.

ESG is nothing but a grift. I bet lots of engineers are here who worked on stuff where people were serious about solving problems, and came up with good and sometimes even brilliant solutions. We're all proud when we've built something like that. ESG is bullshit solutions to bullshit problems.
52   HeadSet   2023 Feb 14, 7:23pm  

richwicks says

There is no way in the protocol to drain energy from a car,

Odd, because they advertise that an electric car can use its battery to power a house during a blackout. The F-150 Lightning even has that capability built in.
https://www.motortrend.com/features/2022-ford-f-150-lightning-home-power/
53   richwicks   2023 Feb 14, 7:40pm  

HeadSet says


Odd, because they advertise that an electric car can use its battery to power a house during a blackout. The F-150 Lightning even has that capability built in.
https://www.motortrend.com/features/2022-ford-f-150-lightning-home-power/


It's not using DIN or ISO 15118; it's using a non-industry protocol.

The CLAIM of ISO 15118 and DIN is that the protocol has the ability to feed energy to the GRID - not a house, the GRID itself. It absolutely cannot.

The Pie in the Sky bullshit is that you'd charge your car during peak energy output (presumably from the solar) and then dump energy into the grid during low energy production (presumably at night) and this will be "green". All the ESG stuff is stupid. You'd end up with a car that was discharged overnight, and then good luck going to work at 8:30 am.

They have NO IDEA what they are doing. None. If you point out these OBVIOUS errors in design, you're simply ignored. I got so fucking frustrated with the industry, I will never work in it again. Disgust doesn't even begin to describe how I feel about it.

They are intransigent idiots.

I have shown this example before, but I'm going to do it again. This is a Solyndra solar "panel":



What's wrong with it?

What you are seeing are solar cells constructed as tubes. This is so, supposedly, you can receive photons from any angle; it's supposed to be mounted over a white surface. So what's the problem here? Well, the fact that you can see through the panel, for one - those are escaping photons, untrapped energy. The second is that each cell is a TUBE, so if you were to flatten each tube into a flat surface, they would OVERLAP.

NOTHING about this design makes any fucking sense, but they got half a BILLION dollars from the Federal government.
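The "flattened tubes would overlap" point checks out with back-of-the-envelope geometry: unrolling a cylinder of diameter d gives a strip of width pi*d, so tubes packed side by side use pi times more cell material than the flat footprint they cover. The numbers below are made up purely for illustration:

```python
import math

d = 0.02   # tube diameter in meters (hypothetical)
n = 40     # tubes across one panel (hypothetical)

unrolled_cell_area = n * math.pi * d   # cell surface per meter of tube length
flat_footprint = n * d                 # flat area the same tubes occupy

print(unrolled_cell_area / flat_footprint)  # → pi, about 3.14x the cell material
```

So for the same expensive cell material, a conventional flat panel covers roughly three times the collecting area.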

I can understand a layman not understanding how everything about this design is wrong, and stupid, but ENGINEERS worked on this.

ESG I am convinced is just a way of doing money laundering. I don't think any of it is serious.

I have seen what appears to be a FEW good ideas. There's a liquid battery made by a company called Ambri.

https://ambri.com/technology/

It's not mobile; it's designed to sit at a fixed spot, and it has to be heated to several hundred degrees before it produces energy output. It works by gravity: the cathode (or anode) is at the bottom, then there is an electrolyte that FLOATS on top of that, and then there's the anode (or cathode) that floats on top of that. It separates by density. And what's brilliant about having a battery that has to be heated so it's liquid?

No dendrites. This battery can potentially work forever. It will never short, because it's all liquid. They operate around 500C, or a little under 1000F.

Now I never worked in this particular side of the industry, but it LOOKS promising.

A car has lithium-ion batteries. They degrade over time. I was forced to buy a new phone about a year ago, because AT&T moved to 5G and my old phone was 3G or 4G, whatever. I still use my old phone, because I don't care if it's damaged, and it works for everything except phone calls and cellular internet. My old phone has gone from being able to play audio for 24 hours before I have to charge it to about 4-6. The battery is shot. But if I used my new phone, not only would I have to worry about damaging it, I would damage it a little with every charge cycle. I basically just use my new phone for phone calls, exclusively.
54   HeadSet   2023 Feb 15, 5:45am  

richwicks says

NOTHING about this design makes any fucking sense,

Solyndra was a scam. The design makes sense if you realize the goal was to give corrupt politicians a "new technology" excuse to give Solyndra a grant that Solyndra officials could abscond with while laundering back a portion as bribes to Dem politicians. Sun Edison was a similar scam, but that was played on shareholders instead of the public.
55   HeadSet   2023 Feb 15, 5:49am  

richwicks says

The Pie in the Sky bullshit is that you'd charge your car during peak energy output (presumably from the solar) and then dump energy into the grid during low energy production (presumably at night) and this will be "green". All the ESG stuff is stupid. You'd end up with a car that was discharged overnight, and then good luck going to work at 8:30 am.

I think the idea is not to discharge your car completely, but more like one quarter of the charge. What I do not like about that idea is that it gives the government control of your car.
56   HeadSet   2023 Feb 15, 5:52am  

richwicks says

It's not using DIN or ISO 15118; it's using a non-industry protocol.

Since Tesla and Chevy electrics also have the capability to power a house, it looks like that DIN or ISO 15118 standard has already been abandoned.
57   richwicks   2023 Feb 15, 8:18pm  

HeadSet says


richwicks says


NOTHING about this design makes any fucking sense,

Solyndra was a scam. The design makes sense if you realize the goal was to give corrupt politicians a "new technology" excuse to give Solyndra a grant that Solyndra officials could abscond with while laundering back a portion as bribes to Dem politicians. Sun Edison was a similar scam, but that was played on shareholders instead of the public.



There's no reason to make something that can't work. It's easy enough to make something that WILL work, and APPEARS novel (and isn't).

What is perplexing about Solyndra isn't that it was a scam; it's that ENGINEERS worked on it. If I had walked into the place for a job and seen it, I would immediately have started asking questions about the efficiency, then explained why I think it's worse than a conventional flat panel, and then seen whether they could explain it so I could understand it. If I couldn't understand their reasoning, I would conclude "this is bullshit, and these assholes know it", and leave with them thinking "that dumb engineer couldn't figure out we're full of shit" OR possibly "that engineer can't work here, he couldn't understand the explanation that totally makes sense to us".

And BTW - the newest panels are paper-thin. I think solar energy may have a future. It's all a question of how much energy is needed to make a panel, and how long it takes the panel to generate that same amount of energy. It was down to 7 years; if we get to 1 year, we really could have a sustainable energy future.

Of course the panels need to last quite a bit longer than what it takes to recover the energy from their manufacture.
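The payback arithmetic above looks like this. Every number is hypothetical, chosen only to match the "7 years" figure mentioned in the comment:

```python
# illustrative energy-payback arithmetic (all values are made-up placeholders)
embodied_energy_kwh = 3500.0   # energy spent manufacturing one panel
annual_output_kwh = 500.0      # energy the installed panel generates per year
lifetime_years = 25            # how long the panel must outlast its payback

payback_years = embodied_energy_kwh / annual_output_kwh
energy_return = annual_output_kwh * lifetime_years / embodied_energy_kwh

print(payback_years)   # → 7.0, the payback period in the comment
print(energy_return)   # → ~3.6x energy returned over a 25-year life
```

Cutting the embodied energy to one year's output would push that return toward 25x, which is the "sustainable future" case being described.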

