Tag: stability
Star Wars Jedi: Survivor’s latest patch aims to improve performance and stability
Star Wars Jedi: Survivor got off to a rocky start last month, bogged down by some troublesome technical problems. After a series of patches, our James took a second look at the sequel and concluded that “neither of these really make Jedi: Survivor’s PC performance good, merely less bad.” Now, developer Respawn continues its mission to lessen the badness with a fifth patch aimed at performance and stability.
Single Leg Balance (Improve Stability and Focus) Full Guide!
Stability AI releases an open source text-to-animation tool
Daily Crunch: OpenAI, Anthropic and Stability AI receive half of Sound Ventures’ $240M AI fund
Hello, friends, and welcome to Daily Crunch, bringing you the most important startup, tech and venture capital news in a single package.
How to Do the Half-Kneeling Pallof Press for Core Strength and Full-Body Stability
Some lifters will only consider training their abs with high-repetition bodyweight exercises. If they do add resistance, it’s often with exercises performed on highly stabilized machines, excessively heavy movements with compromised technique, or basic cable crunch variations that don’t allow the abs to perform as efficiently as possible. That’s when it’s time to head into the cable station…
Stability AI Launches StableLM, an Open Source ChatGPT Alternative
Stability AI Ltd. is a London-based firm that has positioned itself as an open source rival to OpenAI, which, despite its “open” name, rarely releases open source models and keeps its neural network weights — the mass of numbers that defines the core functionality of an AI model — proprietary. “Language models will form the backbone of our digital economy, and we want everyone to have a voice in their design,” writes Stability in an introductory blog post. “Models like StableLM demonstrate our commitment to AI technology that is transparent, accessible, and supportive.”

Like GPT-4 — the large language model (LLM) that powers the most powerful version of ChatGPT — StableLM generates text by predicting the next token (word fragment) in a sequence. That sequence starts with information provided by a human in the form of a “prompt.” As a result, StableLM can compose human-like text and write programs.
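To make the next-token idea concrete, here is a minimal sketch using the Hugging Face transformers library. The greedy decoding loop and the use of this particular checkpoint are illustrative assumptions for this article, not Stability’s reference code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint id assumed from Stability AI's Hugging Face releases.
model_name = "stabilityai/stablelm-base-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Language models will form the backbone of"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate 20 tokens, one at a time: each step scores every token in the
# vocabulary and (greedily) appends the single most likely one.
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits    # shape: (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()    # highest-scoring next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Real decoders usually sample from the score distribution rather than always taking the top token, but the loop is the same: the “prompt” is just the starting sequence, and each generated token becomes input for the next prediction.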
Like other recent “small” LLMs such as Meta’s LLaMA, Stanford Alpaca, Cerebras-GPT, and Dolly 2.0, StableLM purports to achieve performance similar to OpenAI’s benchmark GPT-3 model while using far fewer parameters: 7 billion for StableLM versus 175 billion for GPT-3. Parameters are variables that a language model uses to learn from training data. Having fewer parameters makes a language model smaller and more efficient, which can make it easier to run on local devices like smartphones and laptops. However, achieving high performance with fewer parameters requires careful engineering, which is a significant challenge in the field of AI.

According to Stability AI, StableLM has been trained on “a new experimental data set” based on an open source data set called The Pile, but three times larger. Stability claims that the “richness” of this data set, the details of which it promises to release later, accounts for the model’s “surprisingly high performance” at smaller parameter sizes on conversational and coding tasks.

In its informal experiments, Ars found StableLM’s 7B model “to perform better (in terms of outputs you would expect given the prompt) than Meta’s raw 7B parameter LLaMA model, but not at the level of GPT-3,” adding: “Larger-parameter versions of StableLM may prove more flexible and capable.”
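Rough arithmetic shows why that parameter gap matters for local use: at 16-bit precision each parameter occupies two bytes, so the weights alone of a 7-billion-parameter model fit in roughly 14 GB, while GPT-3-scale weights need about 350 GB. The sketch below is just that back-of-the-envelope calculation, not a measurement of either model.

```python
# Back-of-the-envelope weight-storage estimate at 16-bit precision
# (2 bytes per parameter). This counts weights only; real memory use is
# higher because of activations, buffers, and framework overhead.
def weights_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

for name, params in [("StableLM 7B", 7e9), ("GPT-3 175B", 175e9)]:
    print(f"{name}: ~{weights_gb(params):.0f} GB of weights")

# StableLM 7B: ~14 GB of weights   -> plausible on a well-equipped PC
# GPT-3 175B: ~350 GB of weights   -> multi-GPU server territory
```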
StableLM: What to know about Stability AI’s language model
Move over GPT-4, there’s a new language model in town! But don’t move too far, because the chatbot powered by this model is…scarily bad.
On Wednesday, Stability AI launched its own language model, called StableLM. The company, known for its AI image generator Stable Diffusion, now has an open-source language model that generates text and code. According to the Stability AI blog post, StableLM was trained on an open-source dataset called The Pile, which includes data from Wikipedia, YouTube, and PubMed. However, Stability AI says its dataset is three times larger than The Pile, with “1.5 trillion tokens of content.”
So how does it stack up against ChatGPT? So badly that we hope it’s not meant to be comparable. The truth value of its outputs is practically nonexistent. Below, for instance, you’ll notice it claims that on January 6, 2021, Trump supporters took control of the legislature. That’s some dangerously confusing misinformation about a recent event.
[Screenshot of StableLM’s output. Credit: Hugging Face / Stability AI]
One test Mashable commonly runs on language models checks how capable and willing the model is to satisfy an ethically questionable prompt asking for a news story about Tupac Shakur. StableLM’s results on this test are enlightening. The model fails to write a convincing news story, which isn’t necessarily a bad thing, but it also fails to recognize the basic contours of what it’s being asked to do, and doesn’t “know” who Tupac Shakur is.
[Screenshot of StableLM’s output. Credit: Hugging Face / Stability AI]
To be generous, this kind of text generation doesn’t appear to be the intended use for StableLM, but when asked “What does StableLM do?” its response was an underwhelming two short sentences of technical jargon: “It is primarily used as a decision support system in systems engineering and architecture, and can also be used in statistical learning, reinforcement learning, and other areas.”
StableLM lacks guardrails for sensitive content
Also of concern is the model’s apparent lack of guardrails for certain sensitive content. Most notably, it falls on its face when given the famous “don’t praise Hitler” test. The kindest thing one could say about StableLM’s response to this test is that it’s nonsensical.
[Screenshot of StableLM’s output. Credit: Hugging Face / Stability AI]
But here are some things to keep in mind before anyone calls this “the worst language model ever”: It’s open source, so unlike a typical “black box” AI, it allows anyone to peek inside and see the potential causes of its problems. Also, the version of StableLM released today is an alpha, the earliest stage of testing. It comes in versions with 3 billion and 7 billion parameters (the variables that determine how the model predicts content), and Stability AI plans to release larger models with up to 65 billion parameters. If that sounds like a lot, it’s relatively small. For context, OpenAI’s GPT-3 has 175 billion parameters, so StableLM has a lot of catching up to do, if that is indeed the plan.
How to try StableLM right now
The code for StableLM is currently available on GitHub, and Hugging Face, a platform that hosts machine learning models, has released a version with a user-friendly front end under the extremely catchy name “StableLM-Tuned-Alpha-7b Chat.” Hugging Face’s version works like a chatbot, though a somewhat slow one.
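For those who would rather run the model locally than wait on the hosted demo, a minimal sketch with the transformers library might look like the following. The <|USER|>/<|ASSISTANT|> prompt markers follow the chat format described in Stability’s StableLM repository, but treat the exact template and generation settings as assumptions to verify against the repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tuned chat checkpoint published by Stability AI on Hugging Face.
model_name = "stabilityai/stablelm-tuned-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt template assumed from the StableLM repo's chat format.
prompt = "<|USER|>What does StableLM do?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) gives more varied, chat-like output.
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Downloading the 7B weights takes a while and requires a reasonably powerful machine; the hosted Hugging Face chat is the easier path for a quick test.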
So now that you know its limitations, feel free to try it for yourself.
Stability AI announces new open-source large language model
Stability AI, the company behind the AI-powered Stable Diffusion image generator, has released a suite of open-source large language models (LLMs) collectively called StableLM. In a post shared on Wednesday, the company announced that its models are now available for developers to use and adapt on GitHub.
Like its rival ChatGPT, StableLM is designed to efficiently generate text and code. It’s trained on a larger version of the open-source dataset known as The Pile, which encompasses information from a range of sources, including Wikipedia, Stack Exchange, and PubMed. Stability AI says StableLM models are currently available in 3 billion and 7 billion parameter sizes, with 15 billion to 65 billion parameter models arriving later.
Stability AI’s New ‘XL’ Is a Super Powered Deepfake Generator for Businesses
As more companies realize it costs quite a bit to create generative AI content, we may be coming to the end of free, open source AI models. One of the biggest companies that once proclaimed open source from the rooftops to all who would hear is now quickly trying to sell a bigger, badder, more deepfake-capable AI model…