Tag: regulated
OpenAI CTO Says AI Systems Should ‘Absolutely’ Be Regulated
Murati specifically discussed OpenAI’s approach to AGI, defined as AI with “human-level capability.”
OpenAI’s specific vision around it is to build it safely and figure out how to build it in a way that’s aligned with human intentions, so that the AI systems are doing the things that we want them to do, and that it maximally benefits as many people out there as possible, ideally everyone.
Q: Is there a path between products like GPT-4 and AGI?
A: We’re far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we’re trying to build systems that have a robust understanding of the world, similar to how we humans do. Systems like GPT-3 were initially trained only on text data, but our world isn’t made only of text; we have images as well, so we started introducing other modalities.
The other angle has been scaling these systems to increase their generality. With GPT-4, we’re dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous or high-level direction, then you can figure out how to make it follow that direction. But if it doesn’t even understand that high-level goal or direction, it’s much harder to align it. It’s not enough to build this technology in a vacuum in a lab. We really need contact with reality, with the real world, to see where the weaknesses and breakage points are, and to do so in a way that’s controlled and low-risk while getting as much feedback as possible.
Q: What safety measures do you take?
A: We think about interventions at each stage. We redact certain data from the initial training of the model. With DALL-E, we wanted to reduce harmful bias issues we were seeing… In the model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically, what we’re trying to do is amplify what’s considered good behavior and then de-amplify what’s considered bad behavior.
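For readers who want a concrete picture of the “amplify good, de-amplify bad” idea, here is a minimal, purely illustrative sketch of reward-weighted policy updates. It is not OpenAI’s implementation: the canned responses, the hand-picked rewards, and the tiny softmax “policy” are all made up for illustration, and the human_feedback_reward function is a stand-in for a reward model that would normally be trained on human preference rankings.

```python
# Toy sketch of the "amplify good / de-amplify bad" intuition behind RLHF.
# Everything here (responses, rewards, policy) is invented for illustration.
import numpy as np

responses = ["helpful answer", "evasive answer", "harmful answer"]
logits = np.zeros(len(responses))  # tiny softmax "policy", initially indifferent

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def human_feedback_reward(idx):
    # Stand-in for a reward model fit to human preference rankings (made up).
    return {0: 1.0, 1: 0.0, 2: -1.0}[idx]

rng = np.random.default_rng(0)
lr = 0.5

for _ in range(200):
    probs = softmax(logits)
    idx = rng.choice(len(responses), p=probs)   # sample a response
    reward = human_feedback_reward(idx)         # "human" score for that response
    # REINFORCE-style update: raise the log-probability of rewarded responses,
    # lower it for penalized ones (amplify good, de-amplify bad).
    grad_logp = -probs
    grad_logp[idx] += 1.0
    logits += lr * reward * grad_logp

print({r: round(float(p), 3) for r, p in zip(responses, softmax(logits))})
```

A production RLHF pipeline instead fine-tunes a large language model against a learned reward model, typically with an algorithm such as PPO, but the same amplify/de-amplify intuition applies.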
One final quote from the interview: “Designing safety mechanisms in complex systems is hard… The safety mechanisms and coordination mechanisms in these AI systems and any complex technological system [are] difficult and require a lot of thought, exploration and coordination among players.”
Nuclear Fusion Won’t Be Regulated in the US the Same Way as Nuclear Fission
The top regulatory agency for nuclear materials safety in the U.S. voted unanimously to regulate the burgeoning fusion industry differently than the nuclear fission industry, and fusion startups are celebrating that as a major win. As a result, some provisions specific to fission reactors, like requiring funding to cover claims from nuclear meltdowns, won’t apply to fusion plants. (Fusion reactors cannot melt down….)
Other differences include looser requirements around foreign ownership of nuclear fusion plants, and dispensing with mandatory hearings at the federal level during the licensing process, said Andrew Holland, CEO of the industry group, the Fusion Industry Association… The approach to regulating fusion is akin to the regulatory regime currently used for particle accelerators, machines that accelerate subatomic particles such as electrons or protons to very high speeds, the Fusion Industry Association says…
Technically speaking, fusion will be regulated under Part 30 of the Code of Federal Regulations, Jeff Merrifield, a former NRC commissioner, told CNBC. The regulatory structure for nuclear fission is under Part 50 of that code. “The regulatory structure needed to regulate particle accelerators under Part 30 is far simpler, less costly and more efficient than the more complicated rules imposed on fission reactors under Part 50,” Merrifield told CNBC. “By making this decision to use Part 30, the commission recognized the decreased risk of fusion technologies when compared with traditional nuclear reactors and has imposed a framework that more appropriately aligns the risks and the regulations,” he said.
“Private fusion companies have raised about $5 billion to commercialize and scale fusion technology,” the article points out, “and so the decision from the NRC on how the industry would be regulated is a big deal for companies building in the space.” And they shared three reactions from the commercial fusion industry:
The CEO of the industry group, the Fusion Industry Association, told CNBC the decision was “extremely important.”
The scientific director for fusion startup Focused Energy told CNBC the decision “removes a major area of uncertainty for the industry.”
The general counsel for nuclear fusion startup Helion told CNBC: “It is now incumbent on us to demonstrate our safety case as we bring fusion to the grid, and we look forward to working with the public and regulatory community closely on our first deployments.”
How Should AI Be Regulated?
What they tell me is obvious to anyone watching. Competition is forcing them to go too fast and cut too many corners. This technology is too important to be left to a race between Microsoft, Google, Meta and a few other firms. But no one company can slow down to a safe pace without risking irrelevancy. That’s where the government comes in — or so they hope… [A]fter talking to a lot of people working on these problems and reading through a lot of policy papers imagining solutions, there are a few categories I’d prioritize.
The first is the question — and it is a question — of interpretability. As I said above, it’s not clear that interpretability is achievable. But without it, we will be turning more and more of our society over to algorithms we do not understand… The second is security. For all the talk of an A.I. race with China, the easiest way for China — or any country for that matter, or even any hacker collective — to catch up on A.I. is to simply steal the work being done here. Any firm building A.I. systems above a certain scale should be operating with hardened cybersecurity. It’s ridiculous to block the export of advanced semiconductors to China but to simply hope that every 26-year-old engineer at OpenAI is following appropriate security measures.
The third is evaluations and audits. This is how models will be evaluated for everything from bias to the ability to scam people to the tendency to replicate themselves across the internet. Right now, the testing done to make sure large models are safe is voluntary, opaque and inconsistent. No best practices have been accepted across the industry, and not nearly enough work has been done to build testing regimes in which the public can have confidence. That needs to change — and fast.
The piece also recommends that AI-design companies “bear at least some liability for what their models do.” But what legislation should we see — and what legislation will we see? “One thing regulators shouldn’t fear is imperfect rules that slow a young industry,” the piece argues.
“For once, much of that industry is desperate for someone to help slow it down.”
Regulated ETFs could save crypto from crashing
Exchange-traded funds (ETFs) are widely considered low-risk because they are affordable and diversified as they hold a basket of securities…
Inseego, CyberReef partner to secure 5G networks in regulated industries
Why established and regulated industries are shifting to cloud services
Coinsquare chief operating officer shares thoughts on being the first regulated crypto dealer exchange in Canada
The past actions of bad actors have forced the country’s regulators to take a tough stance on crypto exchanges.