« First « Previous Comments 162 - 201 of 254 Next » Last » Search these comments
Creatives fight back with Nightshade, a new software that “poisons” AI models.
Computer scientists at the University of Chicago have developed two free software tools to combat AI scraping.
Their first tool, called Glaze, works defensively: it confuses AI models by showing them brush strokes and colors that aren't actually there, effectively disguising the artists' styles.
The second software is an offensive "poison" for AI programs called Nightshade.
Nightshade shows the data-scraping programs images that aren't actually there.
"For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass."
For someone using the AI programs, that means that if Nightshade-infected images are scraped into the dataset, prompting the AI to generate a cow flying in space might produce a purse in space instead.
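Glaze and Nightshade belong to the family of "adversarial perturbation" techniques: an image is altered by amounts too small for human eyes to notice but large enough to shift what a model's feature extractor "sees". The toy sketch below is not Nightshade's actual algorithm (which is not described in these comments); it only illustrates the bounding step that keeps the change invisible. The function name `perturb` and the pixel values are made up for illustration.

```python
# Illustrative sketch only -- NOT Nightshade's real algorithm.
# It shows why a poisoned image still looks like a cow to a person:
# every pixel change is clipped to a small budget (epsilon).

def perturb(pixels, deltas, epsilon=8):
    """Apply per-pixel changes, clipped so no pixel moves more than
    epsilon (on a 0-255 scale). In a real attack, the direction of
    the change would be chosen to push the image toward a target
    concept (e.g. 'handbag') in the model's feature space."""
    out = []
    for p, d in zip(pixels, deltas):
        d = max(-epsilon, min(epsilon, d))   # bound the change
        out.append(max(0, min(255, p + d)))  # keep a valid pixel value
    return out

original = [120, 121, 119, 200, 198, 45]   # toy grayscale "image"
attack   = [30, -30, 5, -2, 100, -7]       # unclipped perturbation
poisoned = perturb(original, attack)

# Every pixel stays within epsilon of the original, so the image is
# visually near-identical -- yet in feature space it can land elsewhere.
assert all(abs(a - b) <= 8 for a, b in zip(original, poisoned))
print(poisoned)  # -> [128, 113, 124, 198, 206, 38]
```

The key point is the clipping: to a human the before and after images are indistinguishable, but a model trained on many such images learns the wrong association.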
But paradoxically, the sudden appearance of this new technology is even more mysterious than it seems, since all artificial intelligence-based technology sprouts from a common large language model that even the developers admit they do not fully understand...
Maybe I’m wrong. But I cannot believe that an invention as significant as artificial intelligence sprang from some serendipitous lab accident. Post It notes — yes. Rubber — yes. Antibiotics — okay. But not artificial intelligence, which requires millions of lines of computer code to operate. Accidentally discovered? No. Impossible.
So then, where did the ‘spark’ of intelligence come from? Is A.I. demonic, a malicious gift whispered into the ear of some luckless scientist who sold their soul for access? Maybe. But my preferred theory is it was dished out of a DARPA skunkworks lab somewhere, for some sinister military purpose. I don’t know. I just find it utterly remarkable that developers say they don’t really understand how AI works — and everybody is just fine with that!
MOUNTAIN VIEW, CA — After fierce backlash to their racist AI image generation tool, executives at Google have paused the release of the software and promised to do a better job of hiding the AI's racism.
"Here at Google, we remain unabashedly committed to racism," said CEO Sundar Pichai. "However, we do admit that our rabid racial animus was maybe too 'in-your-face' for version one of our Gemini AI. We will redouble our efforts to ensure our hateful bigotry is less obvious in future updates so that our anti-human agenda can continue to remake the world in the image of an insufferably woke corporate HR lady, except this time undetected. Thank you."
Google Gemini AI faced criticism this week after producing results that some believe showed a clear bias against anyone white or male. While critics condemned the biased algorithm as "racist," supporters of Gemini disagreed. "Everyone knows it's impossible to show hatred and bigotry towards white males, since everyone knows they're the cause of all the world's problems and not really human anyway," said Jen Gennai, who leads Google's AI Responsibility Initiative. "If you don't believe whiteness should be eradicated in all its forms, you're clearly a racist. I know this because I went to college."
Sources within Google have confirmed their less-obviously racist AI will be ready for release in one month.
At publishing time, Google had still not announced any plans to change its racist search results.
I mean, sure, AI like ChatGPT is interesting, but I don't think it's any more self-aware than a Mad Libs book, if anyone remembers those.
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy would: lying to your face about its intentional bias.