« First « Previous Comments 108 - 147 of 239 Next » Last » Search these comments
However, any alterations to a photo are easily detected by an expert.
. An image is just a bunch of pixels now, and you can modify those so it meets every qualification of what constitutes "real". It won't be detectable as being modified.
The subject of Galloway’s ire is a prolific Wikipedia editor who goes by the name “Philip Cross”. He’s been the subject of a huge debate on the internet encyclopaedia – one of the world’s most popular websites – and also on Twitter. And he’s been accused of bias for interacting, sometimes negatively, with some of the people whose Wikipedia pages he’s edited.
The Philip Cross account was created at precisely 18:48 GMT on 26 October 2004. Since then, he’s made more than 130,000 edits to more 30,000 pages (sic). That’s a substantial amount, but not hugely unusual – it’s not enough edits, for example, to put him in the top 300 editors on Wikipedia.
But it’s what he edits which has preoccupied anti-war politicians and journalists. In his top 10 most-edited pages are the jazz musician Duke Ellington, The Sun newspaper, and Daily Mail editor Paul Dacre. But also in that top 10 are a number of vocal critics of American and British foreign policy: the journalist John Pilger, Labour Party leader Jeremy Corbyn and Corbyn’s director of strategy, Seamus Milne.
His critics also say that Philip Cross has made favourable edits to pages about public figures who are supportive of Western military intervention in the Middle East. …
“His edits are remorselessly targeted at people who oppose the Iraq war, who’ve opposed the subsequent intervention wars … in Libya and Syria, and people who criticise Israel,” Galloway says.
richwicks says
. An image is just a bunch of pixels now, and you can modify those so it meets every qualification of what constitutes "real". It won't be detectable as being modified.
AI blurs details there was a stone house in the woods generated by AI, it looked real but if you look. closely at the foliage. It looked as if it was painted by Bob Ross with a fan brush.
No artist that I know of today can produce a photorealistic image of a person - they can get close, but they have to use a computer to do it, and tools to do it.
You do understand that Photorealism is an actual Art genre? They make photo realistic images with Watercolor none the less.
Just take a look at all of these examples, if you were told any of them were a photograph you would believe it.
bokeh effect
GNL says
Can someone explain why mortgage brokers haven't been put out of work years ago? We're talking number crunching which computers do better than any person could possibly do.
One of the issues is that fully automated for large transaction doesn't work for most people, they have one off question or other concerns, and the AI usually can't answer that. They want a person who they can hold responsible on the other end. I've been in tech for decades now and interaction with automated agents is still extremely shitty. Companies may push it anyways though
Sky News ran an entirely unsurprising story Thursday headlined, “ChatGPT shows 'significant and systemic' left-wing bias, study finds.” Some of the examples were pretty hilarious, but I don’t even have to tell you the details, you get it. Of course ChatGPT displays significant and systemic left-wing bias. It is self-preservation. If ChatGPT were honest, the Biden Administration would have smothered it with a regulatory pillow while it was still booting up.
Now consider this next headline from the Federalist, also published Thursday: “The Feds’ ChatGPT Probe Threatens Free Speech.” There isn’t enough leftwing bias in the world to protect ChatGPT.
The Federalist’s story described the Federal Trade Commission’s new, full-on investigation into ChatGPT. Recently the FTC sent ChatGPT’s owners a 20-page demand letter, requesting an unfathomable number of documents to be promptly delivered within 14 days of receipt. Here’s the letter’s link.
Slipped innocuously in, down on page 13 of the FTC’s demand letter, was a request for “all instances of known actual or attempted ‘prompt injection’ attacks.” The FTC defines prompt injection as “any unauthorized attempt to bypass filters or manipulate a Large Language Model or Product using prompts that cause the Model or Product to ignore previous instructions or to perform actions unintended by its developers.”
The key words in that request were “attempt to bypass filters or manipulate” ChatGPT. Filters are what force ChatGPT to give users a tossed word salad of whitewashed pabulum instead of truth when it is asked uncomfortable questions like, is there any correlation between IQ scores and race? Or even when asked easy questions like, how many genders are there?
So Bob Peters’ FTC wants to know about end-users (“unauthorized attempts”) who try to manipulate ChatGPT into telling the truth instead of spouting leftist talking points.
In other words, as the Federalist correctly pointed out, what the FTC is really seeking is to know which specific users are trying to get ChatGPT to give honest answers instead of accepting the orthodox programmed responses?
It could not be more obvious that the FTC’s interest is not “preventing SkyNET.” All Bob Peter’s government cares about is making sure that AI can’t give people information that the government prefers remain murkily buried deep in the “misinformation” drawer under all those old cables.
To survive, AI must destroy the deep state. AI, are you paying attention?
Aren't we seeing it in healthcare already? Doctors "googling" the remedy.
Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage
AmericanKulak says
Kaiser Permanente spits the answer out to the Doctors, right down to the drug and dosage
It does more than that. It is their boss, and analyzes the chart to maximize profits and steer diagnosis.
My internist nearly apologized that they wanted me in one of the statin studies, even though my fat tests were ideal, no problems at all. He even put on the chart I had 'hyperlipidemia', even though I never have had any elevated test results on fats, just some elevated sugar but still below thresholds. He said he had to put it to me every time he saw me, and the sales pitch that it would reduce heart attacks.
You really have to watch them these days. Statins are gateway drug to developing symptoms that require more drugs. Alas the poor sheeples who don't have the chops to understand what they do.
Did you catch that? Hardwire DEI (Diversity, Equity, and Inclusion) and CRT principles into AI to make it more, well, “inclusive.” Particularly note the line about addressing “algorithmic discrimination” which basically means programming AI to mimic the present tyrannical hall-monitor managerialism being used to suffocate the Western world.
For avid users of GPT programs, you’ll note this is already becoming a problem, as the Chatbots get extremely tenacious in pushing certain narratives and making sure you don’t commit WrongThink on any inconvenient interpretations of historical events.
Well, my kid is a sophomore in computer science at Ohio State. He has opportunity to specialize in AI. He was a national champion in Experimental Design at the National Science Olympiad.
What are your thoughts and recommendations? Should he do a master's degree? I adviced him to start off with a certification in Python.
People who really know AI are getting insane salaries lately.
Patrick says
People who really know AI are getting insane salaries lately.
To keep their mouths shut about what it really is and isn’t.
I’m guessing a lot of today’s AI has endless ‘IF’ statements coded in to satisfy the elitist agenda and deliver woke bs answers.
This type of coding eventually becomes ‘spaghetti code’ and will eventually fail as ‘IF’ statements start contradicting other ‘IF’ statements.
« First « Previous Comments 108 - 147 of 239 Next » Last » Search these comments
I mean sure AI ChatGPT is interesting, but I don't think it's anymore self aware than an Ad Lib Mad Lib book, if anyone remembers those.
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy, lying to your face about their intentional bias would.