Tag: companies
I Created a Biased AI Algorithm 25 Years Ago—Tech Companies Are Still Making the Same Mistake
In 1998, I unintentionally created a racially biased artificial intelligence algorithm. There are lessons in that story that resonate even more strongly today.
Three Companies Faked Millions of Comments Supporting 2017 Repeal of ‘Net Neutrality’ Rules
Their investigation "found that the fake comments used the identities of millions of consumers, including thousands of New Yorkers, without their knowledge or consent," as well as "widespread fraud and abusive practices."
Collectively, the three companies have agreed to pay $615,000 in penalties and disgorgement. This is the second series of agreements secured by Attorney General James with companies that supplied fake comments to the FCC… As detailed in a report by the Office of the Attorney General, the nation’s largest broadband companies funded a secret campaign to generate millions of comments to the FCC in 2017. These comments provided “cover” for the FCC to repeal net neutrality rules. To help generate these comments, the broadband industry engaged commercial lead generators that used advertisements and prizes, like gift cards and sweepstakes entries, to encourage consumers to join the campaign.
However, nearly every lead generator that was hired to enroll consumers in the campaign instead simply fabricated consumers’ responses. As a result, more than 8.5 million fake comments that impersonated real people were submitted to the FCC, and more than half a million fake letters were sent to Congress. Two of the companies, LCX and Lead ID, were each engaged to enroll consumers in the campaign. Instead, each independently fabricated responses for 1.5 million consumers. The third company, Ifficient, acted as an intermediary, engaging other lead generators to enroll consumers in the campaign. Ifficient supplied its client with more than 840,000 fake responses it had received from the lead generators it had hired.
The Office of the Attorney General’s investigation also revealed that the fraud perpetrated by the various lead generators in the net neutrality campaign infected other government proceedings as well. Several of the lead generation firms involved in the broadband industry’s net neutrality comment campaigns had also worked on other, unrelated campaigns to influence regulatory agencies and public officials. In nearly all of these advocacy campaigns, the lead generation firms engaged in fraud. As a result, more than 1 million fake comments were generated for other rulemaking proceedings, and more than 3.5 million fake digital signatures for letters and petitions were generated for federal and state legislators and government officials across the nation.
LCX and Lead ID were responsible for many of these fake comments, letters, and petition signatures. Across four advocacy campaigns in 2017 and 2018, LCX fabricated consumer responses used in approximately 900,000 public comments submitted to the Environmental Protection Agency (EPA) and the Bureau of Ocean Energy Management (BOEM) at the U.S. Department of the Interior. Similarly, in advocacy campaigns between 2017 and 2019, Lead ID fabricated more than half a million consumer responses. These campaigns targeted a variety of government agencies and officials at the federal and state levels…
LCX and its principals will pay $400,000 in penalties and disgorgement to New York and $100,000 to the San Diego District Attorney’s Office.
Thanks to Slashdot reader gkelley for sharing the news.
Read more of this story at Slashdot.
EU Crypto Tax Plans Include NFTs, Foreign Companies, Draft Text Shows
The bill, dated May 5, closely matches proposals made by the European Commission in December 2022, as part of a bid to stop EU residents stashing crypto abroad to hide it from the taxman. The commission would have to set up a register of crypto asset operators by December 2025, bringing forward a previous deadline by one year, and the rules will apply as of Jan. 1, 2026. Controversially, the law — known as the eighth directive on administrative cooperation (DAC8) — still includes platforms for trading non-fungible tokens that can be used for payment or investment, and providers from outside the bloc that have EU clients.
H2O AI launches H2OGPT and LLM Studio to help companies make their own chatbots
Florida Bill Aims to Protect Space Companies From Getting Sued by Their Passengers
The private space industry has set up shop at Florida’s Cape Canaveral and the state is looking out for its billionaire space entrepreneurs. The Florida House passed a bill that would protect commercial space ventures against legal liability in the event of a crew member’s injury or death.
FTC warns tech companies against AI shenanigans that harm consumers
Since its establishment in 1914, the US Federal Trade Commission has stood as a bulwark against the fraud, deception, and shady dealings that American consumers face every day — fining brands that "review hijack" Amazon listings, making it easier to cancel magazine subscriptions and blocking exploitative ad targeting. On Monday, Michael Atleson, an attorney in the FTC's Division of Advertising Practices, laid out both the commission's reasoning for how emerging generative AI systems like ChatGPT and DALL-E 2 could be used to violate the FTC Act's unfairness standard, and what it would do to companies found in violation.
“Under the FTC Act, a practice is unfair if it causes more harm than good,” Atleson said. “It’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.”
He notes that the new generation of chatbots like Bing, Bard and ChatGPT can be used to influence users' "beliefs, emotions, and behavior." We've already seen them employed as negotiators within Walmart's supply network and as talk therapists, both occupations specifically geared toward influencing those around you. The danger compounds when paired with automation bias, wherein users more readily accept the word of a presumably impartial AI system, and with anthropomorphism. "People could easily be led to think that they're conversing with something that understands them and is on their side," Atleson argued.
He concedes that the issues surrounding generative AI technology go far beyond the FTC's immediate purview, but reiterates that the commission will not tolerate unscrupulous companies using it to take advantage of consumers. "Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups," the FTC lawyer warned, "should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services."
The FTC’s guardrails also apply to placing ads within a generative AI application, not unlike how Google inserts ads into its search results. “People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship,” Atleson wrote. “And, certainly, people should know if they’re communicating with a real person or a machine.”
Finally, Atleson leveled an unsubtle warning to the tech industry. “Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering,” he wrote. “If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.” That’s a lesson Twitter already learned the hard way.
This article originally appeared on Engadget at https://www.engadget.com/ftc-warns-tech-companies-against-ai-shenanigans-that-harm-consumers-175851417.html?src=rss
The White House is examining how companies use AI to monitor workers
The Biden administration is preparing to examine how companies use artificial intelligence to monitor and manage workers. According to Bloomberg, the White House will publish a blog post later today that invites American workers to share how automated tools are being used in their workplaces.
“While these technologies can benefit both workers and employers in some cases, they can also create serious risks to workers,” the post states, per Bloomberg. “The constant tracking of performance can push workers to move too fast on the job, posing risks to their safety and mental health.” Citing media reports, the White House adds the technology has also been used to deter workers from organizing their workplaces and to perpetuate pay and discipline discrimination.
The blog post calls for input from a variety of stakeholders, including researchers, advocacy groups and even employers. Notably, the Biden administration says it wants to know what regulations and enforcement action the federal government should implement to address the “economic, safety, physical, mental and emotional impacts” of workplace surveillance tech.
The call for information comes after a handful of states passed laws against unreasonable productivity quotas. Specifically, New York’s Warehouse Worker Protection Act grants workers the right to request information on their quota at any time. It also prohibits companies from imposing productivity demands that interfere with an employee’s state-mandated meal and restroom breaks.
This article originally appeared on Engadget at https://www.engadget.com/the-white-house-is-examining-how-companies-use-ai-to-monitor-workers-174217114.html?src=rss
Washington Passes Law Requiring Consent Before Companies Collect Health Data
Under Washington’s new law, which comes into effect in March 2024, medical apps and sites must ask a user for permission to collect their health data in a nondeceptive manner that “openly communicates a consumer’s freely given, informed, opt-in, voluntary, specific, and unambiguous written consent.” The site and apps must also disclose what kind of data they plan to collect and if they plan to sell it. Additionally, the bill will block medical providers from using geofencing to collect location information about the patients that visit the facility.
Tech Companies Allegedly Conspired to Game the H-1B Visa Lottery System
Tech companies have reportedly found a loophole in the highly coveted H-1B visa lottery system for prospective employees. The Biden Administration says it has found evidence that a small number of companies have joined together to exploit the H-1B lottery by entering foreign employees' names numerous times to…