

Feds, tech fall short on watching extremists, Senate says


The FBI and the Department of Homeland Security are failing to adequately monitor domestic extremists, according to a new Senate report that also faulted social media platforms for encouraging the spread of violent and antigovernment content.

The report, issued Wednesday by the Senate Homeland Security panel, called on federal law enforcement to reassess its overall response to the threat of homegrown terrorism and extremism.

The report recommends new definitions of extremism shared across agencies, improved reporting on crimes linked to white supremacy and antigovernment groups, and better use of social media to prevent violence, said Sen. Gary Peters, the Michigan Democrat who chairs the committee.

Growing domestic extremism has been linked to the country’s widening political divide and a rise in distrust of institutions. Critics of social media’s role in radicalizing extremists say that misinformation and hate speech spread online are fueling the problem, and in some cases encouraging acts of real-world violence like the Jan. 6, 2021, attack on the U.S. Capitol.

“Folks who were looking at what was happening on social media should have known that something very bad could potentially go down on Jan. 6 here at the Capitol,” Peters said Wednesday on a conference call with reporters.

In response to the report, the FBI emailed a statement to The Associated Press defending its handling of domestic terrorism. The agency said it has provided comprehensive reports to Congress on the threat of domestic extremism motivated by racism or antigovernment views and tracks it carefully.

“They are among the FBI’s top threat priorities,” the agency said.

A DHS spokesperson responded similarly Wednesday, saying the agency uses a “community-based approach to prevent terrorism and targeted violence, and does so in ways that protect privacy, civil rights and civil liberties.”

The leaders of both agencies are scheduled to testify before Peters’ committee on Thursday, part of its annual hearing on domestic threats.

Efforts by federal law enforcement to use social media to track domestic extremism have prompted questions about civil liberties and the targeting of communities of color. Republicans have accused tech platforms, meanwhile, of using content moderation to censor conservative opinions.

Facebook, Twitter, TikTok and YouTube were all singled out in the report for encouraging harmful content by using algorithms designed to increase user engagement. Those algorithms often prioritize clicks over quality, potentially sending users down a rabbit hole of increasingly provocative material.

The report noted that tech companies often use content moderation tools to remove or flag extremist content after it’s already spread. They should change their algorithms and products to ensure they aren’t encouraging the content in the first place, the report recommended.

“The rise in domestic terrorism can be partially attributed to the proliferation of extremist content on social media platforms and the failure of companies to effectively limit it in favor of actions that increase engagement on their platforms,” the report concluded.



Facebook misled parents, failed to guard kids’ privacy, regulators say


U.S. regulators say Facebook misled parents and failed to protect the privacy of children using its Messenger Kids app, including by misrepresenting the access it gave app developers to private user data.

As a result, the Federal Trade Commission on Wednesday proposed sweeping changes to a 2020 privacy order with Facebook — now called Meta — that would prohibit it from profiting from data it collects on users under 18. This would include data collected through its virtual-reality products. The FTC said the company has failed to fully comply with the 2020 order.

Meta would also be subject to other limitations, including on its use of face-recognition technology, and would be required to provide additional privacy protections for its users.

“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”

Meta called the announcement a “political stunt.”

“Despite three years of continual engagement with the FTC around our agreement, they provided no opportunity to discuss this new, totally unprecedented theory. Let’s be clear about what the FTC is trying to do: usurp the authority of Congress to set industry-wide standards and instead single out one American company while allowing Chinese companies, like TikTok, to operate without constraint on American soil,” Meta said in a prepared statement. “We have spent vast resources building and implementing an industry-leading privacy program under the terms of our FTC agreement. We will vigorously fight this action and expect to prevail.”

Facebook launched Messenger Kids in 2017, pitching it as a way for children to chat with family members and friends approved by their parents. The app doesn’t give kids separate Facebook or Messenger accounts. Rather, it works as an extension of a parent’s account, and parents get controls, such as the ability to decide with whom their kids can chat.

At the time, Facebook said Messenger Kids wouldn’t show ads or collect data for marketing, though it would collect some data it said was necessary to run the service.

But child-development experts raised immediate concerns.

In early 2018, a group of 100 experts, advocates and parenting organizations contested Facebook’s claims that the app was filling a need kids had for a messaging service. The group included nonprofits, psychiatrists, pediatricians, educators and the children’s music singer Raffi Cavoukian.

“Messenger Kids is not responding to a need — it is creating one,” the letter said. “It appeals primarily to children who otherwise would not have their own social media accounts.” Another passage criticized Facebook for “targeting younger children with a new product.”

Facebook, in response to the letter, said at the time that the app “helps parents and children to chat in a safer way,” and emphasized that parents are “always in control” of their kids’ activity.

The FTC now says this has not been the case. The 2020 privacy order, under which Facebook paid a $5 billion fine, required an independent assessor to evaluate the company’s privacy practices. The FTC said the assessor “identified several gaps and weaknesses in Facebook’s privacy program.”

The FTC also said Facebook, from late 2017 until 2019, “misrepresented that parents could control whom their children communicated with through its Messenger Kids product.”

“Despite the company’s promises that children using Messenger Kids would only be able to communicate with contacts approved by their parents, children in certain circumstances were able to communicate with unapproved contacts in group text chats and group video calls,” the FTC said.

As part of the proposed changes to the FTC’s 2020 order, Meta would also be required to pause launching new products and services without “written confirmation from the assessor that its privacy program is in full compliance” with the order.



Elon Musk threatens to reassign NPR’s Twitter account


WASHINGTON (AP) — Elon Musk threatened to reassign NPR’s Twitter account to “another company,” according to the nonprofit news organization, the latest move in an ongoing spat between Musk and media groups since his $44 billion acquisition of Twitter last year.

“So is NPR going to start posting on Twitter again, or should we reassign @NPR to another company?” Musk wrote in one email late Tuesday to NPR reporter Bobby Allyn.

NPR stopped tweeting from its main account after Twitter abruptly labeled it “state-affiliated media” last month, a term that has also been used to identify outlets controlled or heavily influenced by authoritarian governments. Twitter later changed the label to “government-funded media.”

NPR said that both labels were inaccurate and undermined its credibility — noting the nonprofit news company operates independently of the U.S. government. Federal funding from the Corporation for Public Broadcasting accounts for less than 1% of NPR’s annual operating budget, the company said.

The last tweets on NPR’s main account are from April 12 — when the news organization shared a thread of other places readers and listeners can find its journalism.

Twitter temporarily slapped other news organizations — including the BBC and PBS — with “government-funded media” labels. PBS also stopped using its Twitter account in response.

In an article published late Tuesday, Allyn, NPR’s tech reporter, detailed the messages the billionaire owner of Twitter sent regarding NPR’s account. Musk pointed to NPR’s decision to stop tweeting as the rationale for possibly reassigning the handle.

“Our policy is to recycle handles that are definitively dormant,” Musk wrote in one email. “Same policy applies to all accounts. No special treatment for NPR.”

According to Twitter’s online policy, the social media platform determines an account’s inactivity based on logging on — not tweeting. Twitter says that users should log in at least every 30 days to keep their accounts active, and that “accounts may be permanently removed due to prolonged inactivity.”

Musk’s comments and his actions, however, do not always match and it is uncertain if he will actually reassign NPR’s handle, regardless of Twitter’s published policy on account activity.

When asked by NPR who would be willing to use NPR’s Twitter account, Musk replied, “National Pumpkin Radio,” along with a fire emoji and a laughing emoji, NPR reported.

It is unknown whether NPR has logged into its account since April; the account currently carries a blue check without the previous “government-funded media” label. The Associated Press reached out to NPR for comment early Wednesday.

Musk disbanded Twitter’s media and public relations department after the takeover.

As of Wednesday, the NPR Twitter handle still appeared to belong to NPR. If Musk does reassign the account to another user, experts warn, it could spread misinformation and further erode credibility.

“Potentially losing access to a handle as a form of pressure is really just a continuation of eroding the credibility of information sharing on Twitter,” Zeve Sanderson, executive director of New York University’s Center for Social Media and Politics, told The Associated Press.

“For journalism, there’s not only brand safety concerns, but in addition to that, there are a ton of concerns around misinformation potentially being perceived as a lot more credible — because someone (could be) tweeting from the NPR handle when it’s really not them,” Sanderson added.

It is the latest volley in what many experts describe as a chilling and uncertain landscape for journalism on Twitter since Musk acquired the company in October.

In addition to removing news organizations’ verifications and temporarily adding labels like “government-funded media” on some accounts, Musk abruptly suspended the accounts of individual journalists who wrote about Twitter late last year.

In response to Musk’s Tuesday emails, Liz Woolery, digital policy lead at the literary organization PEN America, said that it is “hard to imagine a more potent example of Musk’s willingness to use Twitter to arbitrarily intimidate and retaliate against any person or organization that irks him, with or without provocation.”

“It’s a purely authoritarian tactic, seemingly intended to undermine one of the country’s premier and most trusted news organizations—one that is especially important to rural communities across the U.S.,” Woolery added in a Wednesday statement to The Associated Press.



Scientists warn of AI dangers but don’t agree on solutions


CAMBRIDGE, Mass. (AP) — Computer scientists who helped build the foundations of today’s artificial intelligence technology are warning of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.

Geoffrey Hinton, the so-called Godfather of AI, plans to outline his concerns Wednesday at a conference at the Massachusetts Institute of Technology, having retired from Google so he could speak more freely. He has already voiced regrets about his work and doubt about humanity’s survival if machines get smarter than people.

Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science prize, told The Associated Press on Wednesday that he’s “pretty much aligned” with Hinton’s concerns brought on by chatbots such as ChatGPT and related technology, but worries that to simply say “We’re doomed” is not going to help.

“The main difference, I would say, is he’s kind of a pessimistic person, and I’m more on the optimistic side,” said Bengio, a professor at the University of Montreal. “I do think that the dangers — the short-term ones, the long-term ones — are very serious and need to be taken seriously by not just a few researchers but governments and the population.”

There are plenty of signs that governments are listening. The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris in what’s being described by officials as a frank discussion on how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.

But all the talk of the most dire future dangers has some worried that hype around superhuman machines — which don’t yet exist — is distracting from attempts to set practical safeguards on current AI products that are largely unregulated.

Margaret Mitchell, a former leader on Google’s AI ethics team, said she’s upset that Hinton didn’t speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.

“It’s a privilege that he gets to jump from the realities of the propagation of discrimination now, the propagation of hate language, the toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalized in tech,” said Mitchell, who was also forced out of Google in the aftermath of Gebru’s departure. “He’s skipping over all of those things to worry about something farther off.”

Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, all won the Turing Award in 2019 for their breakthroughs in the field of artificial neural networks, instrumental to the development of today’s AI applications such as ChatGPT.

Bengio, the only one of the three who didn’t take a job with a tech giant, has voiced concerns for years about near-term AI risks, including job market destabilization, automated weaponry and the dangers of biased data sets.

But those concerns have grown recently, leading Bengio to join other computer scientists and tech business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI’s latest model, GPT-4.

Bengio said Wednesday he believes the latest AI language models already pass the “Turing test,” a method British codebreaker and AI pioneer Alan Turing introduced in 1950 to measure when AI becomes indistinguishable from a human — at least on the surface.

“That’s a milestone that can have drastic consequences if we’re not careful,” Bengio said. “My main concern is how they can be exploited for nefarious purposes to destabilize democracies, for cyber attacks, disinformation. You can have a conversation with these systems and think that you’re interacting with a human. They’re difficult to spot.”

Where researchers are less likely to agree is on how current AI language systems — which have many limitations, including a tendency to fabricate information — will actually get smarter than humans.

Aidan Gomez was one of the co-authors of the pioneering 2017 paper that introduced a so-called transformer technique — the “T” at the end of ChatGPT — for improving the performance of machine-learning systems, especially in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch at the company’s California headquarters when his team sent out the paper around 3 a.m. when it was due.

“Aidan, this is going to be so huge,” he remembers a colleague telling him, of the work that’s since helped lead to new systems that can generate humanlike prose and imagery.

Six years later and now CEO of his own AI company, Cohere, Gomez is enthused about the potential applications of these systems but bothered by fearmongering he says is “detached from the reality” of their true capabilities and “relies on extraordinary leaps of imagination and reasoning.”

“The notion that these models are somehow gonna get access to our nuclear weapons and launch some sort of extinction-level event is not a productive discourse to have,” Gomez said. “It’s harmful to those real pragmatic policy efforts that are trying to do something good.”
