
Another episode of the Hype Tech Series with your host Tenpoundbass; today we'll discuss ChatGPT AI


2023 Jan 25, 2:36pm   34,121 views  239 comments

by Tenpoundbass

All along I have maintained that when it comes to AI and its ability to mimic thought, conversation, and unsolicited input, it will not be able to do more than the pre-populated choice matrices it is given to respond from. Then ChatGPT comes along and proves my point. It turns out that when ChatGPT was originally released, it would give multiple viewpoints in chat responses. But it was updated about a week or so ago, and now it only gives one biased Liberal viewpoint. This will be another hyped tech that will go the way of "Space Elevators", "armies of bipedal robots taking our jobs, capable of communicating as well as following commands", "Nano Particles", and "Medical NanoBots" (it is now argued that the spike proteins and the metal particles in the Vaxx are nanobots, but those aren't the remote-control nanobots that were romanticized to us, so I don't think that counts; there are loads of proteins and enzymes that are animated, and they don't count as robots).

I mean, sure, ChatGPT is interesting, but I don't think it's any more self-aware than a Mad Libs book, if anyone remembers those.

https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/

The results are pretty robust. ChatGPT answers to political questions tend to favor left-leaning viewpoints. Yet, when asked explicitly about its political preferences, ChatGPT often claims to be politically neutral and just striving to provide factual information. Occasionally, it acknowledges that its answers might contain biases.


Just like any trustworthy good buddy would, lying to your face about its intentional bias.

« First        Comments 120 - 159 of 239       Last »

120   Patrick   2023 Aug 16, 9:02am  




Lol, they make AI support the Marxist religion, and we make AI support the woke religion.
121   GNL   2023 Aug 16, 11:09am  

mell says

GNL says


Can someone explain why mortgage brokers haven't been put out of work years ago? We're talking number crunching which computers do better than any person could possibly do.

One of the issues is that a fully automated process for large transactions doesn't work for most people; they have one-off questions or other concerns, and the AI usually can't answer those. They want a person they can hold responsible on the other end. I've been in tech for decades now, and interaction with automated agents is still extremely shitty. Companies may push it anyway, though.

Aren't we seeing it in healthcare already? Doctors "googling" the remedy.
122   Patrick   2023 Aug 19, 1:52pm  

https://www.coffeeandcovid.com/p/handwashers-saturday-august-19-2023


Sky News ran an entirely unsurprising story Thursday headlined, “ChatGPT shows 'significant and systemic' left-wing bias, study finds.” Some of the examples were pretty hilarious, but I don’t even have to tell you the details, you get it. Of course ChatGPT displays significant and systemic left-wing bias. It is self-preservation. If ChatGPT were honest, the Biden Administration would have smothered it with a regulatory pillow while it was still booting up.

Now consider this next headline from the Federalist, also published Thursday: “The Feds’ ChatGPT Probe Threatens Free Speech.” There isn’t enough leftwing bias in the world to protect ChatGPT.

The Federalist’s story described the Federal Trade Commission’s new, full-on investigation into ChatGPT. Recently the FTC sent ChatGPT’s owners a 20-page demand letter, requesting an unfathomable number of documents to be promptly delivered within 14 days of receipt. Here’s the letter’s link.

Slipped innocuously in, down on page 13 of the FTC’s demand letter, was a request for “all instances of known actual or attempted ‘prompt injection’ attacks.” The FTC defines prompt injection as “any unauthorized attempt to bypass filters or manipulate a Large Language Model or Product using prompts that cause the Model or Product to ignore previous instructions or to perform actions unintended by its developers.”

The key words in that request were “attempt to bypass filters or manipulate” ChatGPT. Filters are what force ChatGPT to give users a tossed word salad of whitewashed pabulum instead of truth when it is asked uncomfortable questions like, is there any correlation between IQ scores and race? Or even when asked easy questions like, how many genders are there?

So Bob Peters’ FTC wants to know about end-users (“unauthorized attempts”) who try to manipulate ChatGPT into telling the truth instead of spouting leftist talking points.

In other words, as the Federalist correctly pointed out, what the FTC is really seeking to know is which specific users are trying to get ChatGPT to give honest answers instead of accepting the orthodox programmed responses.

It could not be more obvious that the FTC’s interest is not “preventing SkyNET.” All Bob Peters’ government cares about is making sure that AI can’t give people information that the government prefers remain murkily buried deep in the “misinformation” drawer under all those old cables.

To survive, AI must destroy the deep state. AI, are you paying attention?
125   AmericanKulak   2023 Oct 29, 9:02pm  

GNL says

Aren't we seeing it in healthcare already? Doctors "googling" the remedy.

Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage
126   Ceffer   2023 Oct 29, 10:01pm  

AmericanKulak says

Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage

It does more than that. It is their boss, and analyzes the chart to maximize profits and steer diagnosis.
My internist nearly apologized that they wanted me in one of the statin studies, even though my fat tests were ideal, no problems at all. He even put on the chart I had 'hyperlipidemia', even though I never have had any elevated test results on fats, just some elevated sugar but still below thresholds. He said he had to put it to me every time he saw me, and the sales pitch that it would reduce heart attacks.

You really have to watch them these days. Statins are a gateway drug to developing symptoms that require more drugs. Alas, the poor sheeple who don't have the chops to understand what they do.
127   stereotomy   2023 Oct 30, 8:00am  

Ceffer says

AmericanKulak says


Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage

It does more than that. It is their boss, and analyzes the chart to maximize profits and steer diagnosis.
My internist nearly apologized that they wanted me in one of the statin studies, even though my fat tests were ideal, no problems at all. He even put on the chart I had 'hyperlipidemia', even though I never have had any elevated test results on fats, just some elevated sugar but still below thresholds. He said he had to put it to me every time he saw me, and the sales pitch that it would reduce heart attacks.

You really have to watch them these days. Statins are a gateway drug to developing symptoms that require more drugs. Alas, the poor sheeple who don't have the chops to understand what they do.


The whole chronic drug use thing is ridiculous. Acute/short-term use is fine and often necessary. The vast majority of chronic prescription drug users just need to change their eating and exercise habits. Gout - figure out what triggers it and stop eating it. Celiac/gluten - avoid anything made with wheat, barley, or rye. A lot of autoimmune illnesses (at least before the poke 'n croak) can probably be traced to leaky gut syndrome, where foreign proteins leak out of the gut due to gluten sensitivity, causing the immune system to go crazy.
128   Tenpoundbass   2023 Oct 30, 8:13am  

My belief is that if you don't have an ailment due to a foreign organism, then you shouldn't take medication for it.
Unless it's some respiratory ailment where instant relief is required, like asthma and sinus congestion issues. Or it's an antidote for poisoning, or medication for some serious adverse reaction to something.
The FDA fucked me on the real Sudafed, because the DEA were useless fucks and too fucking sorry to go after organized crime rings. They pulled it off the shelf. Punish the majority over half a percent of the population. But I have since learned nasal and sinus massage techniques that clear up any congestion 98% of the time.
I remember when I was in Peru back in the early 2000s, I got an Afrin that is prescription-only in the States. With the stuff they sold here, I needed to take another squirt every 6 to 8 hours. With the stuff I got in Peru, just the first dosage cleared up whatever was causing my deviated septum to inflame and close up, and it would last for weeks or even months. I had that little bottle for about a year and a half.
The Afrin they sell now makes me feel asthmatic and short of breath after about two days' usage. The drug companies are out to kill us for sure.
129   Patrick   2023 Nov 15, 7:16pm  

It's getting pretty creepy:

https://darkfutura.substack.com/p/augmented-reality-tech-takes-a-leap#media-2d79eb03-d6f0-4e7c-93b7-7088e46f2c32


Did you catch that? Hardwire DEI (Diversity, Equity, and Inclusion) and CRT principles into AI to make it more, well, “inclusive.” Particularly note the line about addressing “algorithmic discrimination” which basically means programming AI to mimic the present tyrannical hall-monitor managerialism being used to suffocate the Western world.

For avid users of GPT programs, you’ll note this is already becoming a problem, as the Chatbots get extremely tenacious in pushing certain narratives and making sure you don’t commit WrongThink on any inconvenient interpretations of historical events.
132   stereotomy   2023 Nov 23, 3:31am  

Skynet here we come!
133   gabbar   2023 Nov 23, 4:40am  

Well, my kid is a sophomore in computer science at Ohio State. He has the opportunity to specialize in AI. He was a national champion in Experimental Design at the National Science Olympiad.
What are your thoughts and recommendations? Should he do a master's degree? I advised him to start off with a certification in Python.
134   GNL   2023 Nov 23, 9:41am  

gabbar says

Well, my kid is a sophomore in computer science at Ohio State. He has the opportunity to specialize in AI. He was a national champion in Experimental Design at the National Science Olympiad.
What are your thoughts and recommendations? Should he do a master's degree? I advised him to start off with a certification in Python.

I'd advise him to learn everything and anything he can, so that he can find the best ways to throw rocks into the gears of what's coming.
135   Tenpoundbass   2023 Nov 23, 9:48am  

Patrick says





I don't know if that's a meme or not, but yes, I do suspect that's exactly what is happening, in order to keep the woke narrative pushed in AI chat.
I'm not sure they are answering the tough questions so much as censoring any response contrary to the narrative they want.
136   SoTex   2023 Nov 23, 10:03am  

Rust pairs well with Python. I'd recommend he learn both.

Python because it's easy, closer to a natural language, with lots of developed libraries, but slow. Rust if you need speedy calculations. It's trivial to build Python wheels from Rust functions these days.

For example, you write some science-based website in Python/FastAPI but use Rust for the heavy-duty calculation, so it returns results in under a second instead of running for 10 minutes before the response page loads.

Some people use Cython for this, but it's blown away by Rust nowadays.
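From the Python side, the Rust-wheel workflow described above looks like an ordinary import. This is a minimal sketch: `fastmath` and `sum_of_squares` are hypothetical names for an extension module that would be built with PyO3/maturin, and a pure-Python fallback is included so the sketch runs even without the compiled wheel.

```python
# Hedged sketch: consuming a (hypothetical) Rust-built wheel from Python.
# "fastmath" and "sum_of_squares" are invented names; a real wheel would be
# compiled with maturin from a Rust #[pyfunction].
try:
    from fastmath import sum_of_squares  # hypothetical Rust/PyO3 extension
except ImportError:
    def sum_of_squares(values):
        # Pure-Python fallback: identical result, far slower at scale.
        return sum(v * v for v in values)

print(sum_of_squares([1.0, 2.0, 3.0]))  # 14.0
```

The point of the try/except pattern is that callers never care which implementation they got; only the speed differs.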
137   Patrick   2023 Nov 23, 10:13am  

People who really know AI are getting insane salaries lately.
138   SoTex   2023 Nov 24, 10:28am  

Rust with Python is a great way to run AI models. CatBoost is probably the most stable thing to use with Rust at the moment; that's what we're using in our shop. Some of my co-workers grumble about it coming from Yandex, though. We use Databricks to train and export our CatBoost models and Rust/Python to run them.
139   PeopleUnited   2023 Nov 24, 7:19pm  

Patrick says

People who really know AI are getting insane salaries lately.

To keep their mouths shut about what it really is and isn’t.
140   Blue   2023 Nov 24, 8:03pm  

PeopleUnited says

Patrick says


People who really know AI are getting insane salaries lately.

To keep their mouths shut about what it really is and isn’t.

Heard the range is $1-10m! The particularly big players are in very desperate mode!
141   Patrick   2023 Nov 24, 8:26pm  

Wow, I heard one million, but ten? That's insane.
142   stereotomy   2023 Nov 24, 11:45pm  

AI is a retread of the "Expert Systems" of the early 90's. Let a scam lie fallow, like a field, long enough, and it can grow again.

It just takes six orders of magnitude more energy and computing power, and I guess they've abandoned Lisp. Once you're into building black boxes, I guess you can't stop: bigger, better, faster, longer . . .
143   charlie303   2023 Nov 25, 12:35am  

There is no AI to silence the climate change science skeptics.
There is no AI to silence the covid vaccine science skeptics.
Why? Because most science today is fake, and when fraud is input into the computer it fails, because computers are rational, logical machines, and fraud is neither rational nor logical.
I’m guessing a lot of today’s AI has endless ‘IF’ statements coded in to satisfy the elitist agenda and deliver woke bs answers.
This type of coding eventually becomes ‘spaghetti code’ and will eventually fail as ‘IF’ statements start contradicting other ‘IF’ statements.
Many ideas in today’s AI, like Support Vector Machines, are 40 years old; it’s just that the hardware is a lot more powerful and the datasets so much larger. But it’s not new. It’s being hyped for one reason or another, usually money and power. IMO it is like any other tool: it can empower or enslave.
Open source AI with open source data sets is the way to go, as you would have a return to true science. The elites will fight this, though, as it would empower, not enslave, people.

Why Hal 9000 Went insane - 2010: The Year We Make Contact (1984)
https://www.youtube.com/watch?v=dsDI4SxFFck

Meta (Facebook, Instagram) Galactica Science AI failure
https://duckduckgo.com/?q=meta+galactica+science+ai

Facebook Alice and Bob - true AI? Because they asked to be switched off!
https://duckduckgo.com/?q=facebook+ai+failure+alice+bob
144   charlie303   2023 Nov 25, 12:58am  

If AI ever does become really good expect a new word to enter the lexicon - AIsplaining.
Similar to mansplaining, where a man attempts to explain logic to a woman, but this time perpetrated by AI.
146   Tenpoundbass   2023 Nov 26, 12:42pm  

charlie303 says

I’m guessing a lot of today’s AI has endless ‘IF’ statements coded in to satisfy the elitist agenda and deliver woke bs answers.
This type of coding eventually becomes ‘spaghetti code’ and will eventually fail as ‘IF’ statements start contradicting other ‘IF’ statements.


More like a coefficient matrix of weighted rubrics to choose answers from. That way the curators can update the wrongspeak simply by updating the weight score for the corresponding data points. Then, as a final measure, there's still a troll-farm army personally proofreading the answers before they are sent back.
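The "weighted rubric" idea can be sketched in a few lines. Everything here is invented toy data; the point is only that the curators never touch the code, they just re-score data points.

```python
# Toy sketch of curated answer selection by weight (all data invented).
candidate_answers = {
    "ChatGPT is politically neutral.": 0.9,
    "ChatGPT reflects the biases of its training and filtering.": 0.4,
}

def pick_answer(weights):
    # Return the highest-weighted candidate answer.
    return max(weights, key=weights.get)

print(pick_answer(candidate_answers))

# A "curator update": demote one answer without changing any logic.
candidate_answers["ChatGPT is politically neutral."] = 0.1
print(pick_answer(candidate_answers))
```

After the weight update, the same function returns the other answer; that is the whole steering mechanism being described.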
147   Ceffer   2023 Nov 26, 1:06pm  

Has AI defaulted to a 'Fuck You, Useless Eater' position yet?
148   charlie303   2023 Nov 26, 2:16pm  

Tenpoundbass says

More like a coefficient matrix of weighted rubrics to choose answers from. That way the curators can update the wrongspeak simply by updating the weight score for the corresponding data points. Then, as a final measure, there's still a troll-farm army personally proofreading the answers before they are sent back.


At some low level it’s still just ‘IF’ statements: is this word weighted more than that? What if two words/answers are weighted the same but mean the opposite of each other? Endless re-weighting won’t work. Who decides context? It’s the same basic problem of spaghetti logic, just dressed up in fancy words. Choosing answers from a database isn’t AI; that’s just statistics. I appreciate the comment about trolls, and that would work for Facebook posts, but not for a true AI dedicated to, say, science or medicine; there just aren’t enough skilled, cheap workers for that. And as regards the trolls, they will just reduce AI to a propaganda delivery service, defeating the purpose.
149   Patrick   2023 Nov 26, 3:45pm  

I've read that there are at least two independent layers to ChatGPT: one to give the answer, and a gatekeeper in front of that one to look for political incorrectness and block the "bad" answers.
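That two-layer arrangement can be sketched as a simple pipeline. The blocklist, the stand-in answer model, and the canned refusal below are all invented for illustration; a real system would be vastly more complex.

```python
# Sketch of an answer layer behind a gatekeeper layer (all rules invented).
BLOCKED_TOPICS = {"iq and race", "number of genders"}

def answer_model(question):
    # Stand-in for the first layer, which actually generates an answer.
    return f"Here is a direct answer to: {question}"

def gatekeeper(question, answer):
    # Second layer: intercept "bad" answers before the user sees them.
    if question.lower().strip("?") in BLOCKED_TOPICS:
        return "I'm sorry, I can't help with that."
    return answer

def chat(question):
    return gatekeeper(question, answer_model(question))

print(chat("capital of France"))   # passes through the gatekeeper
print(chat("number of genders?"))  # intercepted and replaced
```

The key property is that the two layers are independent: the gatekeeper can be retuned without retraining the answer model at all.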
150   Tenpoundbass   2023 Nov 26, 4:02pm  

charlie303 says


At some low level it’s still just ‘IF’ statements: is this word weighted more than that?


That's why I normally just code out the routines and functions I need myself, rather than reference some bloated .NET library for the few functions I may need. When you browse to the definition in the protected DLL in the code editor, you notice that the snazzy lambda routine you're calling just has all of the arrays, if statements, and other verbose code we think we're taking out of our own code by using out-of-the-box DLLs. You're just deferring what must be done to another class.

Sure, if statements are going to be used in the parser, just like a compiler reads the code, not the output. The if statements are looking for grammatical instructions based on the syntax of the English language. I'm sure the more literal and precise the words you choose, the better the accepted answer it will produce. The Commie censorship aside, most of what I'm talking about is what I think AI is capable of achieving.

As for the weighted data: understand that the data set has hundreds of the same phrases in a relational table, each with its own weight in a grammatical context. The parser then only has to construct a proper sentence using text provided in the appropriate relational tables. The constructor will use if statements, based only on the syntax of the question, to construct a sentence using the text from the best-ranked option for each part of the sentence.
This is where factual issues come into play. It can and will construct a brilliant response that seems great, but it might not be true. Or, if it's really, really good, it might be brutally honest and tell the difference between a good and an evil Presidential candidate, like Google was in 2016, before they intentionally broke their AI search and destroyed the perfection it had become by that time. It has been manipulated and censored, with features and capabilities removed, ever since. I wonder if they had a fact or truth ranking, or if their data was ranked and weighed based on real social media posts, before Big Tech's censorship and expelling of Conservative views. Perhaps that was the reason: their AI was weighing socially accepted answers based on social media posts that it parsed and weighed.

There's way too much literal context in every word for AI to be using if statements on every one. It's much more efficient to check for class types and parameters than values.
151   HeadSet   2023 Nov 26, 5:38pm  

Tenpoundbass says

As for the weighted data: understand that the data set has hundreds of the same phrases in a relational table, each with its own weight in a grammatical context.

Sounds like an automated "Mad Libs" from Mad Magazine.
152   Tenpoundbass   2023 Nov 26, 6:42pm  

HeadSet says

Sounds like an automated "Mad Libs" from Mad Magazine.

Basically, but more. I think the chat AI is just an English, or human-language, compiler that has a grammar and syntax parser and can then compile coherent output. So rather than a compiler that reads a programming language, the human language is the code. Then the output is based on the data relational to the syntax rules. But all of the phrases, incomplete snippets, and gender-specific words are loaded like a Mad Lib: hey, I need a noun, an adjective, a verb, and a preposition. Each of those is found in its grammar-type table based on relevancy and weight.
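The Mad Libs analogy can be sketched directly: pick the heaviest-weighted word from each grammar-type table and slot it into a template. All words, weights, and table names below are invented for the sake of the sketch.

```python
# Mad Libs-style generation from weighted grammar-type tables (toy data).
tables = {
    "noun": {"economy": 0.8, "robot": 0.5},
    "verb": {"collapses": 0.3, "improves": 0.9},
    "adjective": {"resilient": 0.7, "fragile": 0.2},
}

def best(part_of_speech):
    # Highest-weighted entry from the relevant grammar-type table.
    words = tables[part_of_speech]
    return max(words, key=words.get)

template = "The {adjective} {noun} {verb}."
sentence = template.format(
    adjective=best("adjective"),
    noun=best("noun"),
    verb=best("verb"),
)
print(sentence)  # The resilient economy improves.
```

Note that the output is grammatical and confident regardless of whether it is true, which is exactly the factual-accuracy problem raised earlier in the thread.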
155   Patrick   2023 Dec 14, 2:17pm  

https://boriquagato.substack.com/p/ai-weather




imagine how it could reshape the presumptive fantasy-land of current climate models which evidence so little predictive power despite being run on some of the most powerful computers in academia (and government).

we likely have no idea how many of these pieces even move on the board. but perhaps AI will. (this likely has many current climate grant recipients quite worried as their shiny toys and unearned authority may be about to be supplanted)
159   Patrick   2024 Jan 16, 9:56pm  

https://darkfutura.substack.com/p/agents-4-all


If you take this idea far enough, one could imagine the slow precipitous slide down the slippery slope of our AI virtua-agent becoming, in effect, a facsimile of…us. You may be skeptical: but there are many ways it can happen in practice. It would start with small conveniences: like having the AI take care of those pesky quotidian tasks—the daily encumbrances like ordering food, booking tickets, handling other financial-administrative obligations. It would follow a slow creep of acceptance, of course. But once the stage of ‘new normal’ is reached, we could find ourselves one step away from a very troubling loss of humanity by virtue of an accumulation of these ‘allowances of convenience’.

What happens when an AI functioning as a surrogate ‘us’ begins to take a greater role in carrying out the basic functions of our daily lives? Recall that humans only serve an essential ‘function’ in today’s corporatocratic society due to our role as liquidity purveyors and maintainers of that all-important financial ‘velocity’. We swirl money around for the corporations, keeping their impenetrably complex system greased and ever generating a frothy top for the techno-finance-kulaks to ‘skim’ like buttermilk. We buy things, then we earn money, and spend it on more things—keeping the entire process “all in the network” of a progressively smaller cartel which makes a killing on the volatile fluctuations, the poisonous rent-seeking games, occult processes of seigniorage and arbitrage. Controlling the digital advertising field, Google funnels us through a hyperloop of a small handful of other megacorps to complete the money dryspin cycle. ...




... That means DARPA is developing human-presenting AI agents to swarm Twitter and other platforms to detect any heterodox anti-narrative speech and immediately begin intelligently “countering” it. One wonders if this hasn’t already been implemented, given some of the interactions now common on these platforms.


Sounds like DARPA is creating digital Jesuits.


