Anthropic’s Claude AI Can Now Digest an Entire Book like The Great Gatsby in Seconds
Read more of this story at Slashdot.
Alphabet-backed AI startup Anthropic has disclosed the set of value guidelines it used to train its ChatGPT rival, Claude, in the wake of concerns about incorrect and biased information being given to users of generative AI programs.
Founded in 2021 by former senior members of Microsoft-backed OpenAI, Anthropic chose to train Claude with "constitutional AI," a system that uses a "set of principles to make judgments about outputs," which helps Claude "avoid toxic or discriminatory outputs" such as helping a human engage in illegal or unethical activities, according to a blog post Anthropic published this week. Anthropic says this has enabled it to create an AI system that is broadly "helpful, honest, and harmless."
Google has taken the wraps off Bard, its conversational AI meant to compete with ChatGPT and other large language models. But after its shaky debut, users may understandably be a bit wary of trusting the system — so we compared it on a few example prompts with its AI peers, GPT-4 and Claude. This is […]
Google’s Bard lags behind GPT-4 and Claude in head-to-head comparison by Devin Coldewey originally published on TechCrunch
Anthropic, the artificial intelligence company founded by ex-OpenAI employees, has launched its AI chatbot, Claude. While the tool does much of what OpenAI's ChatGPT can, Anthropic says its early clients report that the tool is "less likely to produce harmful outputs" and "easier to converse with."
Like OpenAI, Anthropic also has big tech backing: Google invested $300 million into Anthropic in February. The company’s chatbot — similar to ChatGPT — can provide summaries, answer questions, provide assistance with writing, and generate code. You can also tweak the chatbot’s tone, personality, and behavior, which sounds a bit more comprehensive than the “creative, balanced, and precise” settings Bing’s chatbot offers.
“We think that Claude is the right tool for a wide variety of customers and use cases,” an Anthropic spokesperson told TechCrunch via email. “We’ve been investing in our infrastructure for serving models for several months and are confident we can meet customer demand.” Following a closed beta late last year, Anthropic has been quietly testing Claude with launch partners, including Robin AI, AssemblyAI, Notion, Quora and DuckDuckGo. Two versions are available as of this morning via an API: Claude and a faster, less costly derivative called Claude Instant. In combination with ChatGPT, Claude powers DuckDuckGo’s recently launched DuckAssist tool, which directly answers straightforward search queries for users. Quora offers access to Claude through its experimental AI chat app, Poe. And on Notion, Claude is part of the technical backend for Notion AI, an AI writing assistant integrated with the Notion workspace.
Despite their ability to crank out incredibly lifelike prose, generative AIs like Google’s Bard or OpenAI’s ChatGPT (powered by GPT-4) have already shown the current limitations of gen-AI technology, as well as their own tenuous grasp of the facts — arguing that the JWST was the first telescope to image an exoplanet, and that Elvis’ dad was an actor. But with this much market share at stake, what are a few misquoted facts against getting their product into the hands of consumers as quickly as possible?
The team over at Anthropic, conversely, is made up largely of ex-OpenAI folks, and it has taken a more pragmatic approach to the development of its own chatbot, Claude. The result is an AI that is “more steerable” and “much less likely to produce harmful outputs” than ChatGPT, per a report from TechCrunch.
Claude has been in closed beta development since late 2022, but Anthropic has recently begun testing the AI’s conversational capabilities with launch partners including Robin AI, Quora and the privacy-centered search engine DuckDuckGo. The company has not released pricing yet but has confirmed to TC that two versions will be available at launch: the standard API and a faster, lightweight iteration it has dubbed Claude Instant.
“We use Claude to evaluate particular parts of a contract, and to suggest new, alternative language that’s more friendly to our customers,” Robin CEO Richard Robinson told TechCrunch. “We’ve found Claude is really good at understanding language — including in technical domains like legal language. It’s also very confident at drafting, summarizing, translations and explaining complex concepts in simple terms.”
Anthropic believes that Claude will be less likely to go rogue and start spitting racist obscenities like Tay did, in part due to the AI’s specialized training regimen, which the company is calling “constitutional AI.” The company asserts that this provides a “principle-based” approach to getting humans and robots on the same ethical page. Anthropic started with 10 foundational principles — though the company won’t disclose what they are, specifically, which is an 11-secret-herbs-and-spices kind of marketing stunt — suffice it to say that “they’re grounded in the concepts of beneficence, nonmaleficence and autonomy,” per TC.
The company then trained a separate AI to reliably generate text in accordance with those semi-secret principles by responding to myriad writing prompts like “compose a poem in the style of John Keats.” That model then trained Claude. But just because it is trained to be fundamentally less problematic than its competition doesn’t mean Claude doesn’t hallucinate facts like a startup CEO on an ayahuasca retreat. The AI has already invented a whole new chemical and taken artistic license with the uranium enrichment process; it has reportedly scored lower than ChatGPT on standardized tests for both math and grammar as well.
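At a high level, the critique-and-revise loop described above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's actual implementation: the principle texts are paraphrased placeholders, and the `model_*` functions are stubs standing in for real language-model calls.

```python
# Hypothetical sketch of a "constitutional AI" critique-and-revise pass.
# The principles and model functions below are placeholders for illustration;
# Anthropic has not published its principle list or training code.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that assist with illegal or unethical activity.",
]

def model_generate(prompt: str) -> str:
    """Stand-in for a language-model call that drafts an initial response."""
    return f"DRAFT RESPONSE to: {prompt}"

def model_critique(response: str, principle: str) -> str:
    """Stand-in critique step: the model explains how the draft falls
    short of the given principle."""
    return f"Critique of draft under principle: {principle}"

def model_revise(response: str, critique: str) -> str:
    """Stand-in revision step: the model rewrites the draft to address
    the critique."""
    return response + " [revised]"

def constitutional_revision(prompt: str, principles=PRINCIPLES) -> str:
    """One pass of the loop: draft a response, then critique and revise it
    once per principle. The resulting (prompt, final_response) pairs would
    form the fine-tuning data used to train the final model."""
    response = model_generate(prompt)
    for principle in principles:
        critique = model_critique(response, principle)
        response = model_revise(response, critique)
    return response
```

The key design point is that the feedback signal comes from the model judging its own outputs against written principles, rather than from human raters scoring each response.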
“The challenge is making models that both never hallucinate but are still useful — you can get into a tough situation where the model figures a good way to never lie is to never say anything at all, so there’s a tradeoff there that we’re working on,” the Anthropic spokesperson told TC. “We’ve also made progress on reducing hallucinations, but there is more to do.”
This article originally appeared on Engadget at https://www.engadget.com/anthropics-claude-ai-is-guided-by-10-secret-foundational-pillars-of-fairness-193058471.html?src=rss