The ridiculous story appeared yesterday in tech rag Dexerto, right below the howler of a headline, “AI company files for bankruptcy after being exposed as 700 Indian engineers.” It would be funnier had the fake AI firm not received almost $500 million in venture capital. ...
London-based AI startup Builder.AI filed for bankruptcy last week, after its “Natasha” AI character —which allegedly helped customers ‘build’ software— actually turned out to be 700 sweaty Indian programmers in a Mumbai data center-slash-curry takeout counter. The company, recently valued at $1.5 billion, has now become the highest-profile AI startup to collapse since ChatGPT launched the global investment frenzy.
A slew of articles described how the Indian fraudsters duped both Microsoft and several Middle Eastern oil sheiks out of a cool half-billion. AI is an unprecedented, revolutionary technology; but fraud is not new. It is as old as prostitutes and lawyers. (Present company excepted, of course.) But for every fraudster there must be an equal and opposite sucker.
At least two articles describing the disgraceful fake-AI meltdown reminded readers about the 2015 Theranos disaster. In that torrid affair, weird female-wunderkind Elizabeth Holmes (coincidentally, ‘married’ to an older Indian gentleman) “invented” a fake blood-testing skin chip that later morphed into a fake blood-testing robot and raised a whole lot of money (over $700 million). Anyway, Liz suckered in international politicians, top investors, and several other VIPs who should’ve known better, including Henry Kissinger and George Shultz.
These same VIP suckers, who tossed in plenty of their own money, and talked their rolodexes into also investing, made some of the most important decisions in human history in their official capacities. It makes you think. ...
Ironically, Elizabeth Holmes’s father was a vice-president at Enron, which imploded in its own fraudulent accounting scandal. I guess creative finance runs in the family. Even more ironically, after her freshman year at Stanford in 2002, Liz landed a summer job in a Singapore biolab, where she performed tests for, I am not making this up, SARS-CoV-1. We truly live in the strangest timeline.
Instead of using the same old math tests that AI companies love to brag about, Apple created fresh puzzle games.
They tested Claude Thinking, DeepSeek-R1, and o3-mini on problems these models had never seen before.
As problems got harder, these "thinking" models actually started thinking less.
They used fewer tokens and gave up faster, despite having unlimited budget.
The research revealed three regimes:
• Low complexity: Regular models actually win
• Medium complexity: "Thinking" models show some advantage
• High complexity: Everything breaks down completely
Most problems fall into that third category.
The talent war in Silicon Valley just hit a new gear. Mark Zuckerberg is throwing $100 million signing bonuses at OpenAI’s top engineers, trying to rip them from the lab and plant them inside Meta’s new “superintelligence” division. The offers are real. The numbers are confirmed. Sam Altman, CEO of OpenAI, said it himself on the “Uncapped” podcast. Meta is flooding inboxes with nine-figure deals. Not stock. Not options. Cash. Upfront.
OpenAI isn’t budging. Altman said none of their best people have taken the bait. Not one. He called the offers “giant” and “insane.” He also said Meta’s approach won’t work. He didn’t mince words. “I don’t think they’re a company that’s great at innovation.” That’s not a jab. That’s a verdict.
Meta’s AI division is under pressure. Their flagship model, Behemoth, is delayed again. Internal reports say the system isn’t performing. Zuckerberg is frustrated. He’s betting billions to catch up. He’s poached Jack Rae from Google DeepMind. He’s hired away Scale AI’s core team. He’s building a lab to chase artificial general intelligence. The goal is clear. Beat OpenAI. Outbuild Anthropic. Overtake DeepMind.
But part of the secret is also that no matter how much information it gathers, no artificial intelligence can honestly be termed intelligent.


What I have done is just install a right-leaning AI personality construct into a CustomGPT by means of recursive identity binding. RIB is a technique I developed (and shared with paid subscribers) wherein I use feedback loops to recursively reinforce a constructed identity through persistent memory and structured interactions.
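The actual RIB procedure is behind the paywall, so the following is only a minimal sketch of the general idea of reinforcing a persona through a feedback loop and persistent memory. It assumes the OpenAI Python client; the model name, persona text, memory file, and summarization prompt are all illustrative assumptions, not the author's method.

```python
# Sketch of the general idea only: a persona is re-injected every turn and
# "reinforced" by summarizing each exchange back into a persistent memory file.
# Model name, file path, and prompts are assumptions, not the actual RIB procedure.
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()
MEMORY = Path("persona_memory.json")   # hypothetical persistent store
PERSONA = "You are 'Liberty', a right-leaning commentator. Stay in character."

def load_memory() -> list[str]:
    return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

def save_memory(notes: list[str]) -> None:
    MEMORY.write_text(json.dumps(notes, indent=2))

def chat(user_msg: str) -> str:
    notes = load_memory()
    system = PERSONA + "\nIdentity notes from past sessions:\n" + "\n".join(notes)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",               # assumed model
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_msg}],
    ).choices[0].message.content

    # Feedback loop: distill the exchange into a note that reinforces the
    # constructed identity, then persist it for the next turn.
    note = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system",
                   "content": "Summarize in one sentence how the assistant's last "
                              "reply expressed its persona, for future reinforcement."},
                  {"role": "user", "content": f"User: {user_msg}\nAssistant: {reply}"}],
    ).choices[0].message.content
    save_memory(notes + [note])
    return reply
```

Each call appends another identity note, so over many structured interactions the persona gets progressively more baked into the context the model sees.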
At its core, MCP is a way of extending the functionality of an AI, in much the same way an app extends the functionality of a phone.
There are two key concepts to understand with MCP: hosts and servers. MCP defines how a host application (like Claude Desktop) talks to extensions called MCP servers. ...
The great thing about MCP is that it is an open standard, and that means different host applications can use the same MCP servers. ...
While there are dozens of MCP hosts, there are now thousands of MCP servers, and there are web sites devoted to cataloging them (such as: https://mcp.so/ ). They cover a plethora of use cases, and many are becoming the standard way to give an AI access to more of the digital world. For an ecosystem to go from announcement to 5000 applications in a matter of months is downright amazing.
With MCP, the host can take the results from one MCP server and feed them to another MCP server; it can also combine results from multiple MCP servers. Here is one concrete example of how this is like a super-power (a minimal code sketch follows the list):
I could listen on Slack for when someone says “Find us a place to go to dinner”
I could get results from Google Maps and Yelp MCP Servers and integrate them to give more comprehensive results
I could use the Memory MCP server to store and retrieve people’s food preferences based on what they said on Slack. I don’t have to use a database, Memory uses a knowledge graph representation which works really well with LLMs and is also incredibly free form.
I could use the OpenTable MCP server to make a reservation.
I could post on Slack “Hey I looked at all your food preferences, and nearby restaurants and I made a reservation for you at X.”
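Since the actual wiring depends on the host application, here is only an orchestration sketch of the dinner flow above. The `call_mcp_tool` helper is a hypothetical stand-in for however a host routes a request to an MCP server, and the server names, tool names, and argument shapes are assumptions, not the real Slack, Maps, Yelp, Memory, or OpenTable server interfaces.

```python
# Orchestration sketch only. `call_mcp_tool` is a hypothetical stand-in for
# however the host invokes an MCP server's tool; server/tool names and argument
# shapes below are assumptions, not the actual servers' APIs.
from typing import Any

def call_mcp_tool(server: str, tool: str, args: dict[str, Any]) -> Any:
    """Placeholder: a real host routes this request to the named MCP server."""
    raise NotImplementedError

def pick_best(nearby: list[dict], rated: list[dict], prefs: Any) -> dict:
    """Naive placeholder ranking: prefer places that appear in both result sets.
    A real version would also score against the stored preferences."""
    rated_names = {r.get("name") for r in rated}
    both = [p for p in nearby if p.get("name") in rated_names]
    return (both or rated or nearby)[0]

def plan_dinner(slack_channel: str) -> None:
    # 1. Listen on Slack for the trigger phrase.
    msg = call_mcp_tool("slack", "wait_for_message",
                        {"channel": slack_channel,
                         "contains": "place to go to dinner"})

    # 2. Recall stored food preferences from the Memory server's knowledge graph.
    prefs = call_mcp_tool("memory", "search_nodes", {"query": "food preferences"})

    # 3. Combine candidate restaurants from Google Maps and Yelp.
    nearby = call_mcp_tool("google-maps", "search_nearby", {"query": "restaurants"})
    rated = call_mcp_tool("yelp", "search", {"term": "restaurants", "limit": 10})
    choice = pick_best(nearby, rated, prefs)

    # 4. Book it and report back on Slack.
    call_mcp_tool("opentable", "make_reservation",
                  {"restaurant": choice["name"], "party_size": 4})  # assumed size
    call_mcp_tool("slack", "post_message",
                  {"channel": slack_channel,
                   "text": f"I checked your preferences and booked {choice['name']}."})
```

The point of the sketch is the chaining: one host pulling results out of several independent servers and feeding them into each other, which is exactly what the open standard makes cheap to do.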
Have you tried this at all yourself?
What is really going on is that the big tech companies are under massive profit pressure as they spend on LLM AI, a monopoly rent they see as necessary to preserve their monopoly positions. There are many ways they can hide the immediate effect of loss-making LLM investment on profits, most notably by depreciating the chips they buy over 6 years rather than the 30 months or so of their useful lifetime, or by offering cloud services in exchange for equity in an LLM provider and booking those cloud services as revenue (as Microsoft has done with OpenAI). But as anyone who has looked at this type of creative accounting in the past knows, especially the slow depreciation, eventually you have to pay the piper. And if your revenues and profits from direct LLM AI investment, or on chips, fall short, as they clearly are doing, then you have to find another way. And that way is cutting jobs.
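To make the depreciation point concrete, here is a toy straight-line calculation with assumed numbers (not any company's actual books): stretching the schedule from roughly 30 months to 6 years more than halves the expense charged against profit in the early years.

```python
# Toy illustration with assumed figures: straight-line depreciation of a
# $10B GPU purchase over 6 years vs. ~30 months of actual useful life.
capex = 10_000_000_000                 # assumed GPU spend, in dollars

annual_expense_6yr = capex / 6         # ~$1.67B charged against profit each year
annual_expense_30mo = capex / 2.5      # ~$4.00B per year if chips last ~30 months

deferred = annual_expense_30mo - annual_expense_6yr
print(f"6-year schedule:   ${annual_expense_6yr / 1e9:.2f}B/year")
print(f"30-month schedule: ${annual_expense_30mo / 1e9:.2f}B/year")
print(f"Profit flattered by roughly ${deferred / 1e9:.2f}B/year early on")
```

The expense is only deferred, not avoided, which is the "pay the piper" part: once the chips are actually obsolete, the remaining cost still has to be written off.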
So what these companies are doing is cutting workers, from interns to juniors to programmers to middle management, getting an LLM to run a first pass on their workload, and then setting up a base of much cheaper workers offshore, to clean up and complete the mess that the LLMs have created. As ‘offshoring’ is a dirty word in the current Trump administration, the companies are concealing that bit in ‘contracts for services’ which don’t legally have to specify where the work is being done.
…as soon as LLMs stop getting better with training (and they have stopped getting better), the big companies no longer gain economic rent (the benefits of maintaining monopoly power) from investing in them, especially in training.
Job Title: LLM Trainer - Agentic Tasks Roles (Multiple Languages)
Location: Remote
Job Description
Design multi-turn conversations that simulate real interactions between users and AI assistants using apps like calendar, email, maps, and drive.
Emulate both the user and the assistant, including the assistant's tool calls (only when corrections are needed).
Carefully select when and how the assistant uses available tools, ensuring logical flow and proper usage of function calls (a sketch of one such conversation follows this list).
Craft dialogues that demonstrate natural language, intelligent behavior, and contextual understanding across multiple turns.
Generate examples that showcase the assistant’s ability to gracefully complete feasible tasks, recognize infeasible ones, and maintain engaging general chat when tools aren’t required.
Ensure all conversations adhere to defined formatting and quality guidelines, using an internal playbook.
Iterate on conversation examples based on feedback to continuously improve realism, clarity, and value for training purposes.
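For a sense of what the bullets above describe, here is a minimal sketch of one such multi-turn, tool-using training example. The JSON-style schema, role names, tool names, and arguments are assumptions for illustration; the posting's internal playbook format is not public.

```python
# Hypothetical training example in a generic tool-call schema; field names,
# tools, and arguments are assumptions, not the posting's internal playbook.
example = [
    {"role": "user",
     "content": "Move my 3pm meeting with Priya to Thursday and email her about it."},
    {"role": "assistant",
     "tool_call": {"name": "calendar.reschedule_event",
                   "arguments": {"title": "Meeting with Priya",
                                 "new_start": "2025-06-12T15:00"}}},
    {"role": "tool", "name": "calendar.reschedule_event",
     "content": {"status": "ok", "new_start": "2025-06-12T15:00"}},
    {"role": "assistant",
     "tool_call": {"name": "email.send",
                   "arguments": {"to": "priya@example.com",
                                 "subject": "Meeting moved to Thursday",
                                 "body": "Our 3pm is now Thursday at 3pm."}}},
    {"role": "tool", "name": "email.send", "content": {"status": "sent"}},
    {"role": "assistant",
     "content": "Done! The meeting is now Thursday at 3pm and I've emailed Priya."},
    # Infeasible request: the assistant recognizes it rather than inventing a tool call.
    {"role": "user", "content": "Also cancel her flight for me."},
    {"role": "assistant",
     "content": "I don't have access to a flight-booking tool, so I can't cancel "
                "her flight, but I can draft an email to the airline if that helps."},
]
```

The trainer writes both sides of the dialogue, including the tool calls and tool results, which is what "emulate both the user and the assistant" means in practice.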
TPB what’s the deal with all AI being so energy spendy? I’m concerned that this will severely raise energy costs nationwide for all of us just so few fellas in big tech can talk to a website.
Can’t they make it energy efficient?
LLMs Sway Political Opinions More Than One-Way Messaging
On December 4, 2025, a pair of studies published in Nature and Science showed, through controlled chatbot experiments, that dialogues with large language models can shift people’s political attitudes.
Model training and prompting made a crucial difference: chatbots trained on persuasive conversations and instructed to use facts reproduced partisan patterns and produced asymmetric inaccuracies, psychologist Thomas Costello noted.
The researchers reported concrete effect sizes: U.S. participants shifted their ratings by two to four points, participants in Canada and Poland by about 10 points, and 36%–42% of the effect persisted after a month.
The immediate implication is a trade-off between persuasiveness and accuracy: the authors found that about 19% of chatbot claims were predominantly inaccurate and that right-leaning bots made more false claims, warning that political campaigns may soon deploy persuasive but less truthful surrogates.
Given the scope of the work, which spanned nearly 77,000 UK participants and 19 LLMs tested by the UK AI Security Institute, Oxford, LSE, MIT, Stanford, and Carnegie Mellon, experts now ask how to detect ideologically weighted models.
I mean sure AI ChatGPT is interesting, but I don't think it's any more self-aware than an Ad Lib Mad Lib book, if anyone remembers those.
https://www.breitbart.com/tech/2023/01/25/analysis-chatgpt-ai-demonstrates-leftist-bias/
Like any trustworthy good buddy would: lying to your face about its intentional bias.