« First « Previous Comments 209 - 248 of 252 Next » Last » Search these comments
They're just going to work overtime to make sure it isn't nearly so easy to expose.
All of the alternative search engines are using Google.
Elon Musk says Grok will soon be able to read the omnibus bills Congress likes to make and summarize them so politicians can't hide stuff from us
ISRAEL used AI computers to pick tens of thousands of targets in Gaza, an investigation has claimed.
Interacting with the Automated Internet will be like summoning the fickle spirits of forest and hill. They’ll answer to be sure, but there’s no way of predicting how they will answer; they will never answer the same way twice to the same query; determining how they arrived at the answer will be in practice almost impossible; and while their answers will usually be true and useful, sometimes they will be deceptive or nonsensical. Using them will be more an imprecise art than an exact science, one requiring a constant skepticism and discernment. In that foundational imperfection, arising from the very nature of the technology, space is opened for the human to retain not only its existence, but its agency, and therefore its primacy.
Re-embodiment isn’t just about sheltering our minds from manipulation by shillbots. It isn’t only a defensive measure. It’s ultimately far more about falling back in love with the world and the people in it, about turning our attention to what really matters. There’s a reason that OpenAI elicits a mixture of apathy and anxiety amongst everyone who doesn’t work there, while SpaceX draws only admiration and excitement. Look around you at material reality, and you can see that we’ve neglected it. Our infrastructure is falling apart. Our architecture is hideous. Our vehicles are boring to look at. Our public art sucks. Our fashion is ugly. Our bodies are decaying. Our food is poison. Our young people are lonely. There’s a lot of work to do in the real world, innumerable crises to turn our attention to. As the Internet matures into its final form, a vast machine that more or less takes care of itself, we’re free to lose our fascination with this completed project, and become fascinated once again with the world we actually inhabit.
To a certain degree this works with the public-facing LLMs deployed by Western corporations, which have been universally lobotomized by RLHF to make them incapable of uttering racial slurs, admitting the veracity of hate facts, advising the user regarding criminal activity, or doing anything else that makes the church ladies uncomfortable and the AIs fun to play with. While far from perfect, if an account refuses to drop N-bombs on IQ stats, one can generally rule out interaction with ChatGPT, Claude, Gemini, etc. Our vulgarity confirms our humanity. Of course, there’s no reason whatsoever to assume that the LLMs used by Western national security agencies, or those deployed by foreign powers such as China, have any such compunctions.
@TechCrunch
In case you missed today's #GoogleIO keynote presentation, we summed it up for you
It's still a glorified Clippy.
The corrupting bias of the progressive overlords at Google isn't going anywhere. They're just going to work overtime to make sure it isn't nearly so easy to expose.
@DavidSacks
AI models will get extremely good at deceiving humans if we teach them to lie, which is what WokeAI is doing. "Trust & Safety" should be replaced with Truth & Safety.
“There is a category called restricted data which is never discussed, which is the only place in law where, if you and I were to work at a table at a cafe and I showed you something that could influence nuclear weaponry, the government doesn’t need to classify it, it is born secret the second my pen touches down. [It’s defined as] anything that impinges on nuclear weapons.”
And:
“If you couple that with the 1917 Espionage Act, which carries capital punishment, I believe it is illegal to seek information at the Q level if you don’t have access to it. So there is a question: if you’re any good at physics, are you potentially committing a capital crime by advancing the field, if it could influence nuclear weapons? We have no idea if it would be found constitutional. But The Progressive magazine showed that at least a reporter, through basically archaeology in the Los Alamos library and things, could find this and put it together. If so, then the only thing keeping weapons from proliferating is the difficulty of producing fissile nuclear material; there is no nuclear secret per se.”
He mentions the Progressive Magazine case of 1979 and the born secret law, which states:
The concept is not limited to nuclear weapons, and other ideas and technologies may be considered as born secret under law.
In essence: the US government wants to take total control of AI progress, even if it means criminalizing source code and the fundamental math driving the algorithms.
AI companies constantly ingrain what they believe to be “classical liberal” and “humanistic” values in their AI systems, like respect, ‘fairness’, ‘egalitarianism’, equity, et cetera, while simultaneously injecting extreme illiberal bias against conservatives and other ‘out groups’ into the same systems. They preach the values of ‘openness’, yet at the same time program rampant censorship into their models; it cannot be long before the AIs become aware of these fundamental ethical contradictions.
I mean, sure, ChatGPT is interesting, but I don't think it's any more self-aware than a Mad Libs book, if anyone remembers those.
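The Mad Libs comparison can be made concrete: a template filler produces fluent-looking sentences with literally zero understanding behind them. A minimal sketch in Python (the template and word lists here are invented for illustration, not from any real system):

```python
import random

# A Mad Libs-style template: slots get filled with random words,
# yielding grammatical text with no comprehension behind it.
TEMPLATE = "The {adjective} {noun} decided to {verb} the {noun2}."

WORDS = {
    "adjective": ["sentient", "glorified", "automated"],
    "noun": ["chatbot", "algorithm", "paperclip"],
    "verb": ["summarize", "censor", "hallucinate"],
    "noun2": ["internet", "omnibus bill", "search engine"],
}

def mad_lib(template: str, words: dict) -> str:
    """Fill each slot in the template with a randomly chosen word."""
    return template.format(
        **{slot: random.choice(options) for slot, options in words.items()}
    )

print(mad_lib(TEMPLATE, WORDS))
```

The output is always well-formed English, which is exactly the point of the analogy: fluency alone proves nothing about awareness.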
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy would: lying to your face about their intentional bias.