Extremely Bare-Bones $20,000 Electric Pickup Truck Doesn’t Even Have a Radio

A Michigan-based startup called Slate Auto has shown off an extremely affordable, all-electric pickup truck.

By far the most eye-catching figure related to the sleek two-seater Slate Truck is its cost: just $20,000 — before federal EV incentives.

But you get what you pay for. The truck is as bare-bones as it gets, lacking even a radio, speaker system, or touchscreen. Its body panels are molded plastic, its range is a middling 150 miles, its wheels are basic steelies, and the seats are uninspired fabric.

However, the company is betting big on customizability, selling more than 100 accessories that could turn the truck into a far more flexible vehicle, like a four-seater SUV with a functioning sound system.

If it sounds a bit like a functional off-brand product you'd buy on Amazon, you might be onto something; the e-retail giant's founder Jeff Bezos is reportedly backing the company.

All told, it's an intriguing offering that subverts the prevailing EV formula of lavish specs and prices. A Rivian R1T goes for over $70,000, while a Ford F-150 Lightning, the electric successor to the best-selling vehicle in the US for decades, starts at around $50,000. And that's without getting into Tesla's divisive Cybertruck, which was supposed to cost $40,000 but ended up going for an opulent $60,000 instead.

The timing of the announcement is also noteworthy. The Trump administration's tariff war has been disastrous for the auto industry, with experts accusing the president of trying to "break" the sector.

Trump has also vowed to end Biden-era EV tax incentive programs. However, whether the $7,500 federal tax credit for EVs and plug-ins will go away remains unclear.

Even Tesla CEO Elon Musk has contributed to a less favorable market environment, gutting a Department of Energy loan program that once helped his EV maker survive.

Like all would-be automakers, Slate will face immense challenges in bringing the vehicle to market, let alone producing it at anywhere near the scale of its much larger rivals.

Besides, do truck buyers want this extreme level of modularity in a country where luxury and a barrage of features have reigned supreme?

As The Verge points out, many other failed EV startups have succumbed to the harsh realities of starting up extremely complex production lines.

Slate’s chief commercial officer, Jeremy Snyder, told The Verge that the company has several key advantages over previous attempts, stripping even the manufacturing process down to a bare minimum.

"We have no paint shop, we have no stamping," he said. "Because we only produce one vehicle in the factory with zero options, we’ve moved all of the complexity out of the factory."

Only time will tell if Slate will be able to deliver on its promises and start fulfilling preorders by late 2026.

One thing's for sure: it has a key advantage right off the bat. It's not a Cybertruck, and it isn't associated in any way with Tesla and Musk's increasingly toxic brands.

More on electric pickups: Elon Musk Is Shutting Down the Part of the Government That Helped Him Save Tesla

Startup Mocked for Charging $5,000 to "Edit" Book Manuscripts Using AI

The startup Spines wants to publish 8,000 books in 2025 using AI. Before that can happen, it should stop embarrassing itself.

Let Him Book

A startup called Spines apparently wants to use AI to edit and publish 8,000 books in 2025 — though no word on whether they'll be any good.

There are several issues with the premise. First, AI is a notoriously untalented wordsmith. It will undoubtedly struggle with the myriad tasks Spines assigns to it, including "proofreads, cover designs, formats, publishes, and... distributing your book in just a couple of weeks," according to the venture's website.

Oh, and then there's the issue of Spines embarrassing itself publicly. 

"A great example of how no one can find actual uses for LLMs that aren't scams for grifts," short story writer Lincoln Michel wrote of the flap on X-formerly-Twitter. "Quite literally the LAST thing publishing needs is... AI regurgitations."

Author Rowan Coleman agreed.

"The people behind Spines AI publishing are spineLESS," Coleman posted on the same site. "They don’t care about books, don’t care about art, don’t care about the instinctive human talent it takes to write, edit and produce a book. They want the magic, without the work."

Feral Page

Spines CEO and cofounder Yehuda Niv told The Bookseller, a UK book business magazine, that Spines had already published seven "bestsellers." But when Spines was pressed to provide sales numbers, a company representative claimed the "data is private and belongs to the author." Hm, suspicious. 

Niv also promised The Bookseller that Spines "isn't self-publishing, is not a traditional publisher and is not a vanity publisher." That's despite the fact that Spines' website, which sells publishing plans ranging from $1,500 to $4,400, advertises to customers who are clearly looking to team up with an inexpensive vanity publisher.

"I sent my book to 17 different publishers and got rejected every time, and vanity publishers quoted me between $11,000 to $17,000," said on Spines' website the author of Spines' "Biological Transcendence and the Tao: An Exposé on the Potential to Alleviate Disease and Ageing and the Considerations of Age-Old Wisdom," which doesn't currently have a single Amazon review. "With Spines, I got my book published in less than 30 days!" 

Hm, interesting. That testimonial makes Spines sound an awful lot like a vanity publisher.

AI startups love to reinvent the wheel and claim it's never been done before. Take the ed tech startup founder who used AI to cover for her run-of-the-mill embezzlement, or the Finnish AI company that put a high-tech twist on the common practice of exploiting incarcerated workers.

Will it work for books? We'll be watching.

More on AI: Character.AI Is Hosting Pro-Anorexia Chatbots That Encourage Young People to Engage in Disordered Eating

Government Test Finds That AI Wildly Underperforms Compared to Human Employees

A series of blind assessments found that human-written summaries scored significantly better than summaries generated by AI.

Sums It Up

Generative AI is absolutely terrible at summarizing information compared to humans, according to the findings of a trial for the Australian Securities and Investments Commission (ASIC) spotted by Australian outlet Crikey.

The trial, conducted by Amazon Web Services, was commissioned by the government regulator as a proof of concept for generative AI's capabilities, and in particular its potential to be used in business settings.

That potential, the trial found, is not looking promising.

In a series of blind assessments, the generative AI summaries of real government documents scored a dire 47 percent on aggregate based on the trial's rubric, and were decisively outdone by the human-made summaries, which scored 81 percent.

The findings echo a common theme in reckonings with the current spate of generative AI technology: not only are AI models a poor replacement for human workers, but their awful reliability means it's unclear if they'll have any practical use in the workplace for the majority of organizations.

Signature Shoddiness

The assessment used Meta's open source Llama 2 70B, which isn't the newest model out there, but at 70 billion parameters it's certainly a capable one.

The AI model was instructed to summarize documents submitted to a parliamentary inquiry, and specifically to focus on what was related to ASIC, such as where the organization was mentioned, and to include references and page numbers. Alongside the AI, human employees at ASIC were asked to write summaries of their own.

Then five evaluators were asked to assess the human and the AI-generated summaries after reading the original documents. These were done blindly — the summaries were simply labeled A and B — and scorers had no clue that AI was involved at all.

Or at least, they weren't supposed to. At the end, when the assessors had finished up and were told about the true nature of the experiment, three said that they suspected they were looking at AI outputs, which is pretty damning on its own.

Sucks On All Counts

All in all, the AI scored lower on every criterion than the human summaries did, the report said.

Strike one: the AI model was flat-out incapable of providing the page numbers where it found its information.

That's something the report notes could be fixed by tinkering with the model. But a more fundamental issue was that it regularly failed to pick up on nuance or context, and often made baffling choices about what to emphasize or highlight.

Beyond that, the AI summaries tended to include irrelevant and redundant information and were generally "waffly" and "wordy."

The upshot: these AI summaries were so bad that the assessors agreed that using them could create more work down the line, because of the amount of fact-checking they would require. If that's the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

More on AI: NaNoWriMo Slammed for Saying That Opposition to AI-Generated Books Is Ableist
