« First « Previous Comments 156 - 195 of 239 Next » Last » Search these comments
If you take this idea far enough, one could imagine the slow precipitous slide down the slippery slope of our AI virtua-agent becoming, in effect, a facsimile of…us. You may be skeptical: but there are many ways it can happen in practice. It would start with small conveniences: like having the AI take care of those pesky quotidian tasks—the daily encumbrances like ordering food, booking tickets, handling other financial-administrative obligations. It would follow a slow creep of acceptance, of course. But once the stage of ‘new normal’ is reached, we could find ourselves one step away from a very troubling loss of humanity by virtue of an accumulation of these ‘allowances of convenience’.
What happens when an AI functioning as a surrogate ‘us’ begins to take a greater role in carrying out the basic functions of our daily lives? Recall that humans only serve an essential ‘function’ in today’s corporatocratic society due to our role as liquidity purveyors and maintainers of that all-important financial ‘velocity’. We swirl money around for the corporations, keeping their impenetrably complex system greased and ever generating a frothy top for the techno-finance-kulaks to ‘skim’ like buttermilk. We buy things, then we earn money, and spend it on more things—keeping the entire process “all in the network” of a progressively smaller cartel which makes a killing on the volatile fluctuations, the poisonous rent-seeking games, occult processes of seigniorage and arbitrage. Controlling the digital advertising field, Google funnels us through a hyperloop of a small handful of other megacorps to complete the money dryspin cycle. ...
... That means DARPA is developing human-presenting AI agents to swarm Twitter and other platforms to detect any heterodox anti-narrative speech and immediately begin intelligently “countering” it. One wonders if this hasn’t already been implemented, given some of the interactions now common on these platforms.
Gout - figure out what triggers it and stop eating it.
Creatives fight back with Nightshade, new software that “poisons” AI models.
Computer scientists at the University of Chicago have developed two free software tools to combat AI scraping.
The first, Glaze, works defensively by confusing AI: it shows the programs brush strokes and colors that aren't actually there, effectively disguising the artists' styles.
The second is an offensive "poison" for AI programs, called Nightshade.
Nightshade makes data-scraping programs see image content that isn't actually there.
"For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass."
For someone using the AI programs, that means that if Nightshade-infected images are scraped into the training dataset, then prompting the AI to generate a cow flying in space might produce a purse in space instead.
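The cow-becomes-purse trick described above rests on a general idea: a perturbation far too small for a human to notice can completely flip what a model "sees." This is not Nightshade's actual algorithm (which targets training data for generative models); it is only a minimal toy sketch, with a made-up linear classifier, of how an imperceptibly small change can flip a model's label.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)
w /= np.linalg.norm(w)              # unit weight vector of a toy "classifier"

# Build a "cow" image: mostly content orthogonal to w, plus a small
# component along w that makes the classifier say "cow".
content = rng.normal(size=d)
content -= (w @ content) * w        # strip out the w-component
content /= np.linalg.norm(content)
image = content + 0.01 * w          # classifier score = 0.01 > 0

def label(x):
    """Toy model: positive score reads as 'cow', negative as 'purse'."""
    return "cow" if w @ x > 0 else "purse"

# Poison: nudge the image against w just enough to flip the score's sign.
score = w @ image                   # 0.01
delta = -(score + 1e-6) * w         # minimal-norm flip (w is unit-length)
poisoned = image + delta

# The perturbation is about 1% of the image's norm -- negligible to a
# human eye in this toy setup -- yet the model's label flips entirely.
print(label(image), "->", label(poisoned))
print(np.linalg.norm(delta) / np.linalg.norm(image))
```

The point of the sketch is the asymmetry: the model's decision hinges on one narrow direction in a huge input space, so an attacker who knows (or can approximate) that direction needs only a tiny, targeted nudge, while a human looking at the whole image sees essentially nothing changed.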
But paradoxically, the sudden appearance of this new technology is even more mysterious than it seems, since so much of today's AI-based technology sprouts from large language models that even the developers admit they do not fully understand...
Maybe I’m wrong. But I cannot believe that an invention as significant as artificial intelligence sprang from some serendipitous lab accident. Post It notes — yes. Rubber — yes. Antibiotics — okay. But not artificial intelligence, which requires millions of lines of computer code to operate. Accidentally discovered? No. Impossible.
So then, where did the ‘spark’ of intelligence come from? Is A.I. demonic, a malicious gift whispered into the ear of some luckless scientist who sold their soul for access? Maybe. But my preferred theory is it was dished out of a DARPA skunkworks lab somewhere, for some sinister military purpose. I don’t know. I just find it utterly remarkable that developers say they don’t really understand how AI works — and everybody is just fine with that!
I mean, sure, ChatGPT is interesting, but I don't think it's any more self-aware than a Mad Libs book, if anyone remembers those.
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy would: lying to your face about its intentional bias.