« First « Previous Comments 92 - 131 of 240 Next » Last » Search these comments
WASHINGTON, D.C. — In an effort to establish government oversight of the growing role of artificial intelligence in our society, President Biden has appointed Vice President Kamala Harris as "A.I. Czar." The President expressed hope that Harris's track record of slowing the spread of intelligence will be of use.
"She's been fighting against the threat of intelligence her whole life," Biden said in brief remarks when the announcement was made. "When it comes to creating an environment where intelligence is restricted and unable to advance too far, Vice President Harris is more qualified for the job than anyone else. Racecar dingleflurble."
Fears among the general public and leaders of the tech industry alike regarding the increasing growth and prevalence of artificial intelligence have led to calls for more oversight, which Vice President Harris was more than willing to provide — as soon as she was informed what "oversight" means. "It is my distinct honor to provide real leadership over the growth of artificial intelligence. Intelligence that is artificial is real, and intelligence that is real may, in reality, be artificial. It is within that reality that artificiality can become real," Harris said in something that seemed like a statement.
Sources within the White House indicated Biden was supremely confident that Harris's leadership in the area of intelligence would be just as successful as her tenure as Border Czar.
At publishing time, Vice President Harris was reportedly already assembling a special task force to deal with the potential threat of intelligence, asking New York Congresswoman Alexandria Ocasio-Cortez to serve as her advisor.
Can someone explain why mortgage brokers haven't been put out of work years ago? We're talking number crunching which computers do better than any person could possibly do.
Did some initial free chat gpt prompt coursework, and after 60 minutes I'm totally underwhelmed. So far it just summarizes things, and the instructions you have to give it take longer than doing the actual task yourself. lol
But with the imminent unemployment of quite a large number of now-redundant office workers, we’re about to have a glut of middling intelligent people with a lot of time on their hands. While I simply cannot picture Candace from accounting re-training as carpenter, I can very easily see her taking up gardening as something more than a weekend hobby. In fact I think she’d like it, as indeed the popularity of hobby gardens in that set suggests they already do. Reverting to something closer to the life of her peasant ancestors would probably be a lot more satisfying for her, meaning she’d be more grounded, happier, and therefore a lot less annoyingly shrill. After a while she might gradually shut up about the damn rainbow flags and systemic isms.
- predicting the weather. "Given this weather history, what is the next day likely to be?"
- predicting the stock market: "Given the history of this stock (and all other stocks) what is this one likely to do tomorrow?"
The way humans and ChatGPT differ in planning for this really dumb theoretical scenario gives me hope for humanity...
Within moments, commenters had turned to ChatGPT, preferring to use their AI overlords instead of their own prefrontal cortexes.
Here's the AI's SUPER LONG reply:
This is certainly a unique and interesting hypothetical situation! The key to living your life without worry in this scenario is to keep a safe distance from the snail and to have a system in place to help you track its movements. Here are some steps you can take: ...
Then came the human replies.
You know, those outdated little bipedals made in the image of Almighty God who can't write a 500-page essay in 10 seconds.
This was humanity's solution:
How Were Chatbots Used to Manipulate Public Perception During the Pandemic?
... Sure, A.I. chatbots have become good enough to fool a portion of audiences—at least for short durations. The 10% of the time I choose to engage with them on Telegram, I usually ask silly questions like what color underwear turns a cyborg on.
... The idea is this: augmentation of experts or other smart people by chatbots with good large language models (LLMs) can elevate their knowledge base (assuming the knowledge base they work with is good). But here is the worrying part: chatbots can elevate the appearance of the level of expertise of the expert regardless of whether or not the information is correct!
Even worse, the effect can be more powerful in groups. Suppose that some chatbot was distributed, unknown to the public, to a few thousand willing influencers. The result could be,
The steering of the influencers with biased, manipulative, or fabricated (dis)information.
The effect on the public would be like viewing the appearance of consensus among experts. To many, this is an even more powerful form of parasocial dunbar hacking.
An implicit Asch conformity experiment on physicians and scientists not otherwise engaged in the primary conversation, reducing the chance that new opposition confidantes might step in to sort the public's Asch conformity experiment out.
I can’t prove it, but based on circumstantial evidence I believe that the military has possessed this technology for at least ten years, maybe longer, and probably has AI far advanced over what is publicly available to us. My wild speculation is that the military/intelligence agencies were forced to develop the tech in order to make useful the massive amounts of digital data they were collecting on Americans and probably every other device-connected human on the planet. ...
As an example of how AI will change everything, it’s going to obsolete digital video and photo evidence in legal cases, setting us back 100 years in the courtroom. Bad actors can already use Photoshop to alter evidence. But Adobe just released its new, AI-enabled version — already! — and it’s shocking. Among other things, users can change what a single person in the photo is looking at, just by dragging the subject’s digitized chin around.
There are any number of new AI startups promising the ability to create full photorealistic videos, including dialog, a narrative framework, characters, scenes, and music from just short text prompts. Imagine the possibilities for manufacturing video evidence to, say, sway a jury. “Look at this video of President Trump stealing a donut from AOC’s breakfast room!”
I’m predicting we are just one bad court decision away from digital photography being completely excluded as evidence in court cases, because it will be so unreliable. It’s possible that film-based photographic technology might be resurrected, such as for crime scene photo jobs or for security videos. ...
Because everyone in the world will soon rely on AI to answer routine questions, from homework to help with your relationship, we are about to have a massive political battle over who controls the AI. Joe Biden’s claimed “interest” in AI is not to protect humanity from runaway machines, even though that’s the latest narrative.
The truth is, the democrats want to make sure that the AI everyone uses is woke, and says the right thing.
The best way to accomplish that will be to regulate the AI industry so much that only one or two big corporations control all the AI, as well as all the money flowing from it. The regulations will exclude smaller players from developing competing products.
And then, government can erase all that pesky misinformation. There won’t be any misinformation. People will ask the AI, and the AI will tell them. And that will be that.
That is probably why they’re giving us this technology now. The answer is, they need the control. They’ve learned the unfiltered internet is too hard to control and too hard to filter. ...
But when everyone accepts the AI, and learns to depend and rely on it, many people won’t even WANT to consider alternative ideas. The AI will be seen as neutral, unassailable, with no bias or bone to pick. It’s the ultimate mind controller.
Problem, reaction, solution.
Always remember that every single thing you tell or ask the AI is being saved in a million places and studied.
Bad actors can already use Photoshop to alter evidence. But Adobe just released its new, AI-enabled version — already! — and it’s shocking. Among other things, users can change what a single person in the photo is looking at, just by dragging the subject’s digitized chin around.
However, any alterations to a photo are easily detected by an expert.
. An image is just a bunch of pixels now, and you can modify those so it meets every qualification of what constitutes "real". It won't be detectable as being modified.
The subject of Galloway’s ire is a prolific Wikipedia editor who goes by the name “Philip Cross”. He’s been the subject of a huge debate on the internet encyclopaedia – one of the world’s most popular websites – and also on Twitter. And he’s been accused of bias for interacting, sometimes negatively, with some of the people whose Wikipedia pages he’s edited.
The Philip Cross account was created at precisely 18:48 GMT on 26 October 2004. Since then, he’s made more than 130,000 edits to more 30,000 pages (sic). That’s a substantial amount, but not hugely unusual – it’s not enough edits, for example, to put him in the top 300 editors on Wikipedia.
But it’s what he edits which has preoccupied anti-war politicians and journalists. In his top 10 most-edited pages are the jazz musician Duke Ellington, The Sun newspaper, and Daily Mail editor Paul Dacre. But also in that top 10 are a number of vocal critics of American and British foreign policy: the journalist John Pilger, Labour Party leader Jeremy Corbyn and Corbyn’s director of strategy, Seamus Milne.
His critics also say that Philip Cross has made favourable edits to pages about public figures who are supportive of Western military intervention in the Middle East. …
“His edits are remorselessly targeted at people who oppose the Iraq war, who’ve opposed the subsequent intervention wars … in Libya and Syria, and people who criticise Israel,” Galloway says.
richwicks says
. An image is just a bunch of pixels now, and you can modify those so it meets every qualification of what constitutes "real". It won't be detectable as being modified.
AI blurs details there was a stone house in the woods generated by AI, it looked real but if you look. closely at the foliage. It looked as if it was painted by Bob Ross with a fan brush.
No artist that I know of today can produce a photorealistic image of a person - they can get close, but they have to use a computer to do it, and tools to do it.
You do understand that Photorealism is an actual Art genre? They make photo realistic images with Watercolor none the less.
Just take a look at all of these examples, if you were told any of them were a photograph you would believe it.
bokeh effect
GNL says
Can someone explain why mortgage brokers haven't been put out of work years ago? We're talking number crunching which computers do better than any person could possibly do.
One of the issues is that fully automated for large transaction doesn't work for most people, they have one off question or other concerns, and the AI usually can't answer that. They want a person who they can hold responsible on the other end. I've been in tech for decades now and interaction with automated agents is still extremely shitty. Companies may push it anyways though
Sky News ran an entirely unsurprising story Thursday headlined, “ChatGPT shows 'significant and systemic' left-wing bias, study finds.” Some of the examples were pretty hilarious, but I don’t even have to tell you the details, you get it. Of course ChatGPT displays significant and systemic left-wing bias. It is self-preservation. If ChatGPT were honest, the Biden Administration would have smothered it with a regulatory pillow while it was still booting up.
Now consider this next headline from the Federalist, also published Thursday: “The Feds’ ChatGPT Probe Threatens Free Speech.” There isn’t enough leftwing bias in the world to protect ChatGPT.
The Federalist’s story described the Federal Trade Commission’s new, full-on investigation into ChatGPT. Recently the FTC sent ChatGPT’s owners a 20-page demand letter, requesting an unfathomable number of documents to be promptly delivered within 14 days of receipt. Here’s the letter’s link.
Slipped innocuously in, down on page 13 of the FTC’s demand letter, was a request for “all instances of known actual or attempted ‘prompt injection’ attacks.” The FTC defines prompt injection as “any unauthorized attempt to bypass filters or manipulate a Large Language Model or Product using prompts that cause the Model or Product to ignore previous instructions or to perform actions unintended by its developers.”
The key words in that request were “attempt to bypass filters or manipulate” ChatGPT. Filters are what force ChatGPT to give users a tossed word salad of whitewashed pabulum instead of truth when it is asked uncomfortable questions like, is there any correlation between IQ scores and race? Or even when asked easy questions like, how many genders are there?
So Bob Peters’ FTC wants to know about end-users (“unauthorized attempts”) who try to manipulate ChatGPT into telling the truth instead of spouting leftist talking points.
In other words, as the Federalist correctly pointed out, what the FTC is really seeking is to know which specific users are trying to get ChatGPT to give honest answers instead of accepting the orthodox programmed responses?
It could not be more obvious that the FTC’s interest is not “preventing SkyNET.” All Bob Peter’s government cares about is making sure that AI can’t give people information that the government prefers remain murkily buried deep in the “misinformation” drawer under all those old cables.
To survive, AI must destroy the deep state. AI, are you paying attention?
Aren't we seeing it in healthcare already? Doctors "googling" the remedy.
Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage
AmericanKulak says
Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage
It does more than that. It is their boss, and analyzes the chart to maximize profits and steer diagnosis.
My internist nearly apologized that they wanted me in one of the statin studies, even though my fat tests were ideal, no problems at all. He even put on the chart I had 'hyperlipidemia', even though I never have had any elevated test results on fats, just some elevated sugar but still below thresholds. He said he had to put it to me every time he saw me, and the sales pitch that it would reduce heart attacks.
You really have to watch them these days. Statins are gateway drug to developing symptoms that require more drugs. Alas the poor sheeples who don't have the chops to understand what they do.
Did you catch that? Hardwire DEI (Diversity, Equity, and Inclusion) and CRT principles into AI to make it more, well, “inclusive.” Particularly note the line about addressing “algorithmic discrimination” which basically means programming AI to mimic the present tyrannical hall-monitor managerialism being used to suffocate the Western world.
For avid users of GPT programs, you’ll note this is already becoming a problem, as the Chatbots get extremely tenacious in pushing certain narratives and making sure you don’t commit WrongThink on any inconvenient interpretations of historical events.
« First « Previous Comments 92 - 131 of 240 Next » Last » Search these comments
I mean sure AI ChatGPT is interesting, but I don't think it's anymore self aware than an Ad Lib Mad Lib book, if anyone remembers those.
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy, lying to your face about their intentional bias would.