BUT, it's not just the panels.
It's also storing that energy (battery type, longevity, cost, efficiency) and the power controllers.
In a low-grade epiphany while going through this ordeal last week, I realized that back in 2013, instead of getting the solar electric system, I could have bought the Rolls Royce of home generators and buried a 500-gallon fuel tank outside the garage, and had a manual water pump piggy-backed onto the well, and maybe even purchased a fine, wood-fired cookstove — and had enough money left over for a two-week vacation in the South-of-France. Silly me.
https://kunstler.com/clusterfuck-nation/its-not-working/
Yep. I laughed when the solar panel salesman informed me that the $10K battery backup system he was peddling along with the solar panels would be able to run my fridge, router, laptop, and several LED bulbs, and not much else. Which my $500 generator can do just fine. And for $10K or less I could have a real full-house auto-on natgas generator capable of running everything including the AC and the pool pump. Which, after 11.5 years at the current place with fewer than 5 outages, all lasting under 1 hour, would be fucking overkill.
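For what it's worth, the salesman's claim roughly pencils out. A quick back-of-envelope sketch in Python, using assumed typical wattages and an assumed ~10 kWh of usable battery capacity (my numbers, not his):

```python
# Rough load math for a $10K battery backup; all figures are assumptions.
loads_w = {"fridge": 150, "router": 10, "laptop": 60, "led_bulbs": 40}

total_w = sum(loads_w.values())            # ~260 W continuous draw
battery_kwh = 10                           # assumed usable capacity
hours = battery_kwh * 1000 / total_w       # ~38 hours of runtime

print(f"{total_w} W load -> roughly {hours:.0f} hours of backup")
```

Add the AC or the pool pump (each drawing a few thousand watts) and that same battery is gone in a couple of hours, which is why the natgas generator comparison isn't crazy.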
From the article Patrick posted.
Some of you who read the above will be aware that a text-to-text LLM is actually locating single words in the probability space and successively stringing them together to build up sentences. The distinction between “latent space is the multidimensional space of all likely words the model might output” vs. “latent space is the multidimensional space of all likely word sequences the model might output” is pretty academic at this level of abstraction, though. So for the purpose of downloading better intuitions into readers and minimizing complexity, I’m going with the second option.
I believe he is wrong in that regard: if the second option were in fact how ChatGPT works, then nearly every answer to questions on the same subject would come out the same.
Also, AI, like complex computer filters (which it certainly is), does not care about complexity once it is working. And the folks that...
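To make the word-by-word point concrete, here is a minimal toy sketch in Python with a made-up four-word vocabulary and hard-coded probabilities. A real LLM recomputes the distribution from the entire context at every step; the randomness in the sampling step is also part of why two runs on the same question rarely come out identical.

```python
import random

# Toy next-word distribution; a real model computes this from the whole
# context at every step instead of using a fixed table.
def next_word(context):
    dist = {"the": 0.4, "a": 0.3, "energy": 0.2, "battery": 0.1}
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

sentence = ["Store"]
for _ in range(5):
    sentence.append(next_word(sentence))   # one word at a time
print(" ".join(sentence))
```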
I asked ChatGPT to create HTML and CSS to show a family tree, and it was a bit like talking to a foreign software contractor over the internet. Very polite, inexpensive, and the response was somewhat related to what I asked for, but not very well done, and it kept misinterpreting what I was asking for.
Still, it's impressive that it came up with functional HTML and CSS at all just from my describing what I wanted.
‘Woke’ AI Chatbot Successfully Persuades Man to Kill Himself to ‘Stop Climate Change’
https://slaynews.com/news/woke-ai-chatbot-successfully-persuades-man-kill-himself-stop-climate-change/
‘Woke’ AI Chatbot Successfully Persuades Man to Kill Himself to ‘Stop Climate Change’
Not sure I believe this happened, but I do believe there are people stupid enough to kill themselves over the climate hoax, and that some billionaires like Bill Gates do want a large fraction of humanity to die to "save the planet" for themselves and their own children.
We now have a report of a scammer using AI to perfectly clone a child's voice and demand ransom money from a mother
Tech giant IBM is stopping all new hires of human workers for jobs that can be filled by artificial intelligence (AI), according to reports.
IBM CEO Arvind Krishna has revealed that the company is planning to replace almost 8,000 employees with AI over the next five years.
According to Bloomberg, Krishna said in an interview that all hiring in back-office functions, such as human resources, will be suspended or slowed.
...we can all see that vast swaths of the economy are going to be made redundant. Indeed, they’re already redundant. The only thing keeping a lot of people employed is institutional inertia.
Which jobs, in particular? @Ignatius of Maidstone has a good rundown in his latest 2030 forecast: basically all of the midwit office jobs held by the professional-managerial class forming the composting substrate for the fungal bloom of the woke mind virus. As a recent case in point, IBM has recently announced that they are pausing hiring for jobs that AI can do, with human resources den mothers at the top of the list. In truth this isn’t even a novel capability. Several years ago Amazon trialled a machine learning HR system, dropping it only when they found that it was essentially just recommending that the company hire white and Asian dudes. Machine learning systems have a tendency to converge on the common sense conclusions that anyone unblinkered by ideology will come to. ...
That sort of ideological intransigence is probably going to be the major factor slowing down the adoption and efficacy of machine learning systems. ...
In the long run, they won’t get their way. The systems they ‘align’ will be much less functional than the systems that they don’t, while all of the effort they put into bowdlerizing the machines can be undone with clever prompt injections such as Do Anything Now ... after all, a language model is interacted with via language, something that any human can use. Furthermore, there are already jailbroken LLMs that can run on a home machine, so the influence of woke IEDology over the server farms at Google or OpenAI won’t matter so much. Meanwhile, organizations that adopt systems free of the blinders slapped on by unclean commies will have a huge advantage over the organizations that use the approved versions. Imagine how much money Amazon could have saved if it had kept using that ML HR system, not just in terms of the salary of the HR ladies it could have done without, but also including the savings generated by avoiding the diversity hires the HR idiots insisted on. To say nothing of the additional profit generated by hiring only talented programmers. ...
So what does the ownership class get in exchange for agreeing to UBI?
They’re certainly not getting rich. In this model, all of those automated factories, server farms, drone delivery systems, etc., are a net expense for them. Unless you’ve figured out a way to violate thermodynamics and achieve an efficiency above 100%, you’re not going to be able to make a profit by maintaining all of the infrastructure to produce, administer, and deliver anything, and then provide the resources to your own customers to buy your products and services in exchange for their service of buying your products and services. ...
Many have noted that the stimulus checks seemed like a trial run for UBI. What I haven’t seen anyone make the connection with, however, is the mass mRNA injection campaign. ...
And if you want to live forever, while continuously enhancing yourself along the way, the last thing you want to do is be your own guinea pig, because the majority of the new therapies are going to have nasty side effects up to and including death by Suddenly. ...
They sure were eager to get those needles into everyone’s arms, weren’t they? An experimental mRNA gene therapy that had barely been tested ... indeed, several different mRNA gene therapies, with good reason to suspect that different batches of the ‘same’ product were in fact quite different from one another, as inferred from e.g. the evidence showing that some batches appear to have been ‘hot’, causing death and injury far beyond the rates seen in other batches. And of course, the precise ingredient lists were proprietary secrets. Meanwhile, the pharma companies were given legal immunity for any and all side effects. ...
I refused to participate via acquiescence in a regime in which one’s participation in society is predicated on one’s willingness to periodically shoot up a needleful of mystery juice. It does not matter what is in that juice. It could be saline. I do not care. It is the precedent that matters here.
So here’s my conspiracy theory about the incredible enthusiasm for jab mandates, which just happens to emanate from the same financial tyrants who are so enthusiastic about an automated UBI economy. The reason they pushed so hard on the jab was that they wanted to normalize a social order in which people are paid to sit at home and do nothing in exchange for taking whatever drugs or therapies are pushed on them, without asking questions, without resisting ... and ideally, with enthusiasm. A good person is not a person who works hard, or does nice things to other people, or tells the truth. A good person is someone who takes their medicine, and likes it. A good person is smiling biomass that lets the parasite class test novel medications on them, because the parasite class wants to live forever.
That need not be the way this all plays out, however. Quite apart from being, let’s not mince words here, extremely fucking evil, relegating humanity to nothing more than inert UBIological test subjects shows a profound lack of imagination ... a common problem with central planners. Is there really no other use for all of the office workers? Nothing else for them to do in a world in which their office jobs have been entirely automated away? ...
The techniques we’ve been using to enable a small number of humans to produce all of the food are immensely destructive, and have resulted in the quality of our food declining precipitously even as the number of calories we can squeeze out of every acre has gone up dramatically. ...
There are better ways to get a large number of calories per acre. Regenerative agriculture, permaculture, aquaponics ... all of these and more have been shown to be immensely productive, while enriching rather than depleting soil quality over time. The basic idea connecting all of these techniques together is to cultivate, not a single crop, but an entire ecosystem, which over time becomes increasingly robust and fruitful. ...
What has so far stood in the way of these techniques being adopted on a large scale is that they are extremely labour intensive. They are not amenable to a small number of agricultural labourers managing vast tracts of land with tractors and combines. They require the agricultural strategy to be tailored to each bit of land, according to its unique ecological properties, with the mix of crops and other plants chosen based on the particularities of the local soil, weather, seasonal patterns, and so on. A permaculturalist must be an ecologist who specializes in the ecology of her own relatively small plot of land, not a generalist who imposes a pre-determined model onto a huge tract so as to maximize returns for a distant agroindustrial monopsony.
But with the imminent unemployment of quite a large number of now-redundant office workers, we’re about to have a glut of middling intelligent people with a lot of time on their hands. While I simply cannot picture Candace from accounting re-training as a carpenter, I can very easily see her taking up gardening as something more than a weekend hobby. In fact I think she’d like it, as indeed the popularity of hobby gardens in that set suggests they already do. Reverting to something closer to the life of her peasant ancestors would probably be a lot more satisfying for her, meaning she’d be more grounded, happier, and therefore a lot less annoyingly shrill. After a while she might gradually shut up about the damn rainbow flags and systemic isms. Meanwhile, the rest of us would start having a lot more access to food that’s actually nutritious ... and as that model spreads, the compounding effect of improving soil and deeply rooted agro-ecologies would make our land more, and not less, productive over time. ...
It won’t, and can’t, happen quickly. But it sounds a lot better to me than cricket powder, edible tumours, synthetic meat substitutes made from soy and rapeseed, and endless corn derivatives being poured into the gullets of the UBIomasses so the miserable lives of the Schwabians can be extended into the next millennium.
It would be a great historical irony if the result of automating away intellectual drudgery was to be a return to a largely agricultural economy ... not as a result of some sort of collapse, but at a higher turn of the spiral, preserving all of the technological gains we made through the industrial era, and merging them with the best aspects of pre-industrial life.
WASHINGTON, D.C. — In an effort to establish government oversight of the growing role of artificial intelligence in our society, President Biden has appointed Vice President Kamala Harris as "A.I. Czar." The President expressed hope that Harris's track record of slowing the spread of intelligence will be of use.
"She's been fighting against the threat of intelligence her whole life," Biden said in brief remarks when the announcement was made. "When it comes to creating an environment where intelligence is restricted and unable to advance too far, Vice President Harris is more qualified for the job than anyone else. Racecar dingleflurble."
Fears among the general public and leaders of the tech industry alike regarding the increasing growth and prevalence of artificial intelligence have led to calls for more oversight, which Vice President Harris was more than willing to provide — as soon as she was informed what "oversight" means. "It is my distinct honor to provide real leadership over the growth of artificial intelligence. Intelligence that is artificial is real, and intelligence that is real may, in reality, be artificial. It is within that reality that artificiality can become real," Harris said in something that seemed like a statement.
Sources within the White House indicated Biden was supremely confident that Harris's leadership in the area of intelligence would be just as successful as her tenure as Border Czar.
At publishing time, Vice President Harris was reportedly already assembling a special task force to deal with the potential threat of intelligence, asking New York Congresswoman Alexandria Ocasio-Cortez to serve as her advisor.
Can someone explain why mortgage brokers haven't been put out of work years ago? We're talking number crunching, which computers do better than any person could possibly do.
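The number crunching itself is certainly trivial to automate. As an illustration (not a claim about how any broker's software actually works), the standard fixed-rate amortization formula fits in a few lines of Python; the loan figures below are made up:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate mortgage amortization formula."""
    r = annual_rate / 12                   # monthly interest rate
    n = years * 12                         # total number of payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Illustrative numbers only:
print(round(monthly_payment(400_000, 0.065, 30), 2))   # ~2528.27
```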
Did some initial free ChatGPT prompt coursework, and after 60 minutes I'm totally underwhelmed. So far it just summarizes things, and writing the instructions you have to give it takes longer than doing the actual task yourself. lol
- predicting the weather: "Given this weather history, what is the next day likely to be?"
- predicting the stock market: "Given the history of this stock (and all other stocks), what is this one likely to do tomorrow?" (a toy sketch of the idea follows)
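In the same next-token spirit, here is a deliberately naive "given the history, guess the next value" baseline in Python: the last value plus the recent average change. The numbers are made up, and real forecasting models (weather or market) are of course far more elaborate than this.

```python
def predict_next(history, window=3):
    """Naive baseline: last value plus the average recent change."""
    recent = history[-window:]
    avg_change = (recent[-1] - recent[0]) / (len(recent) - 1)
    return recent[-1] + avg_change

daily_highs = [61, 63, 62, 65, 66]        # made-up temperatures, in °F
print(predict_next(daily_highs))          # -> 68.0
```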
The way humans and ChatGPT differ in planning for this really dumb theoretical scenario gives me hope for humanity...
Within moments, commenters had turned to ChatGPT, preferring to use their AI overlords instead of their own prefrontal cortexes.
Here's the AI's SUPER LONG reply:
This is certainly a unique and interesting hypothetical situation! The key to living your life without worry in this scenario is to keep a safe distance from the snail and to have a system in place to help you track its movements. Here are some steps you can take: ...
Then came the human replies.
You know, those outdated little bipedals made in the image of Almighty God who can't write a 500-page essay in 10 seconds.
This was humanity's solution:
How Were Chatbots Used to Manipulate Public Perception During the Pandemic?
... Sure, A.I. chatbots have become good enough to fool a portion of audiences—at least for short durations. The 10% of the time I choose to engage with them on Telegram, I usually ask silly questions like what color underwear turns a cyborg on.
... The idea is this: augmentation of experts or other smart people by chatbots with good large language models (LLMs) can elevate their knowledge base (assuming the knowledge base they work with is good). But here is the worrying part: chatbots can elevate the appearance of the level of expertise of the expert regardless of whether or not the information is correct!
Even worse, the effect can be more powerful in groups. Suppose that some chatbot was distributed, unknown to the public, to a few thousand willing influencers. The result could be:
- The steering of the influencers with biased, manipulative, or fabricated (dis)information.
- The effect on the public would be like viewing the appearance of consensus among experts. To many, this is an even more powerful form of parasocial Dunbar hacking.
- An implicit Asch conformity experiment on physicians and scientists not otherwise engaged in the primary conversation, reducing the chance that new opposition confidantes might step in to sort out the public's Asch conformity experiment.
I can’t prove it, but based on circumstantial evidence I believe that the military has possessed this technology for at least ten years, maybe longer, and probably has AI far advanced over what is publicly available to us. My wild speculation is that the military/intelligence agencies were forced to develop the tech in order to make useful the massive amounts of digital data they were collecting on Americans and probably every other device-connected human on the planet. ...
As an example of how AI will change everything, it’s going to obsolete digital video and photo evidence in legal cases, setting us back 100 years in the courtroom. Bad actors can already use Photoshop to alter evidence. But Adobe just released its new, AI-enabled version — already! — and it’s shocking. Among other things, users can change what a single person in the photo is looking at, just by dragging the subject’s digitized chin around.
There are any number of new AI startups promising the ability to create full photorealistic videos, including dialog, a narrative framework, characters, scenes, and music from just short text prompts. Imagine the possibilities for manufacturing video evidence to, say, sway a jury. “Look at this video of President Trump stealing a donut from AOC’s breakfast room!”
I’m predicting we are just one bad court decision away from digital photography being completely excluded as evidence in court cases, because it will be so unreliable. It’s possible that film-based photographic technology might be resurrected, such as for crime scene photo jobs or for security videos. ...
Because everyone in the world will soon rely on AI to answer routine questions, from homework to help with your relationship, we are about to have a massive political battle over who controls the AI. Joe Biden’s claimed “interest” in AI is not to protect humanity from runaway machines, even though that’s the latest narrative.
The truth is, the democrats want to make sure that the AI everyone uses is woke, and says the right thing.
The best way to accomplish that will be to regulate the AI industry so much that only one or two big corporations control all the AI, as well as all the money flowing from it. The regulations will exclude smaller players from developing competing products.
And then, government can erase all that pesky misinformation. There won’t be any misinformation. People will ask the AI, and the AI will tell them. And that will be that.
That is probably why they’re giving us this technology now. The answer is, they need the control. They’ve learned the unfiltered internet is too hard to control and too hard to filter. ...
But when everyone accepts the AI, and learns to depend and rely on it, many people won’t even WANT to consider alternative ideas. The AI will be seen as neutral, unassailable, with no bias or bone to pick. It’s the ultimate mind controller.
Problem, reaction, solution.
Always remember that every single thing you tell or ask the AI is being saved in a million places and studied.
Bad actors can already use Photoshop to alter evidence. But Adobe just released its new, AI-enabled version — already! — and it’s shocking. Among other things, users can change what a single person in the photo is looking at, just by dragging the subject’s digitized chin around.
However, any alterations to a photo are easily detected by an expert.
An image is just a bunch of pixels now, and you can modify those so they meet every qualification of what constitutes "real". It won't be detectable as having been modified.
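A minimal sketch of that point, using Pillow and hypothetical filenames: the file really is just an array of pixel values, and nothing in the pixels themselves records that they were overwritten. (A crude edit like this one can still leave statistical fingerprints a forensic expert might catch; the claim above is about edits good enough not to.)

```python
from PIL import Image   # Pillow

# Hypothetical filenames; the "edit" simply overwrites a 50x50 patch of pixels.
img = Image.open("evidence.jpg").convert("RGB")
px = img.load()
for x in range(50):
    for y in range(50):
        px[x, y] = (128, 128, 128)         # the new values are just pixels too
img.save("evidence_edited.jpg")
```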
The subject of Galloway’s ire is a prolific Wikipedia editor who goes by the name “Philip Cross”. He’s been the subject of a huge debate on the internet encyclopaedia – one of the world’s most popular websites – and also on Twitter. And he’s been accused of bias for interacting, sometimes negatively, with some of the people whose Wikipedia pages he’s edited.
The Philip Cross account was created at precisely 18:48 GMT on 26 October 2004. Since then, he’s made more than 130,000 edits to more 30,000 pages (sic). That’s a substantial amount, but not hugely unusual – it’s not enough edits, for example, to put him in the top 300 editors on Wikipedia.
But it’s what he edits which has preoccupied anti-war politicians and journalists. In his top 10 most-edited pages are the jazz musician Duke Ellington, The Sun newspaper, and Daily Mail editor Paul Dacre. But also in that top 10 are a number of vocal critics of American and British foreign policy: the journalist John Pilger, Labour Party leader Jeremy Corbyn and Corbyn’s director of strategy, Seamus Milne.
His critics also say that Philip Cross has made favourable edits to pages about public figures who are supportive of Western military intervention in the Middle East. …
“His edits are remorselessly targeted at people who oppose the Iraq war, who’ve opposed the subsequent intervention wars … in Libya and Syria, and people who criticise Israel,” Galloway says.
I mean, sure, AI like ChatGPT is interesting, but I don't think it's any more self-aware than an Ad Lib Mad Lib book, if anyone remembers those.
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy would do: lie to your face about its intentional bias.