Donald Trump to be allowed back on Facebook after 2-year ban
Facebook parent Meta said Wednesday it will restore former President Donald Trump’s personal account in the coming weeks, ending a two-year suspension it imposed in the wake of the Jan. 6 insurrection.
The company said in a blog post it is adding “new guardrails” to ensure there are no “repeat offenders” who violate its rules, even if they are political candidates or world leaders.
“The public should be able to hear what their politicians are saying — the good, the bad and the ugly — so that they can make informed choices at the ballot box,” wrote Nick Clegg, Meta’s vice president of global affairs.
Clegg added that when there is a “clear risk” of real-world harm, Meta will intervene.
“In the event that Mr. Trump posts further violating content, the content will be removed and he will be suspended for between one month and two years, depending on the severity of the violation,” he wrote. Facebook suspended Trump on Jan. 7, 2021, for praising people engaged in violent acts at the Capitol a day earlier. But the company had resisted earlier calls — including from its own employees — to remove Trump’s account.
Meta said Trump’s accounts will be restored “in the coming weeks” on both Facebook and Instagram. Banned from mainstream social media, Trump has been relying on Truth Social, which he launched after being blocked from Twitter.
Facebook is not only the world’s largest social media site, but had been a crucial source of fundraising revenue for Trump’s campaigns, which spent millions of dollars on the company’s ads in 2016 and 2020. The move, which comes as Trump is ramping up his third run for the White House, will not only allow Trump to communicate directly with his 34 million followers — dramatically more than the 4.8 million who currently follow him on Truth Social — but will also allow him to resume direct fundraising. During the suspension, his supporters were able to raise money for him, but couldn’t run ads directly from him or in his voice.
Responding to the news, Trump blasted Facebook’s original decision to suspend his account as he praised Truth Social.
“FACEBOOK, which has lost Billions of Dollars in value since “deplatforming” your favorite President, me, has just announced that they are reinstating my account. Such a thing should never again happen to a sitting President, or anybody else who is not deserving of retribution!” he wrote.
Other social media companies, including Snapchat, where he remains suspended, also kicked him off their platforms following the insurrection. He was recently reinstated on Twitter after Elon Musk took over the company. He has not tweeted yet.
Civil rights groups and others on the left were quick to denounce Meta’s move. Letting Trump back on Facebook sends a signal to other figures with large online audiences that they may break the rules without lasting consequences, said Heidi Beirich, founder of the Global Project Against Hate and Extremism and a member of a group called the Real Facebook Oversight Board that has criticized the platform’s efforts.
“I am not surprised but it is a disaster,” Beirich said of Meta’s decision. “Facebook created loopholes for Trump that he went right through. He incited an insurrection on Facebook. And now he’s back.”
NAACP President Derrick Johnson blasted the decision as “a prime example of putting profits above people’s safety” and a “grave mistake.”
“It’s quite astonishing that one can spew hatred, fuel conspiracies, and incite a violent insurrection at our nation’s Capitol building, and Mark Zuckerberg still believes that is not enough to remove someone from his platforms,” he said.
But Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, called the reinstatement “the right call — not because the former president has any right to be on the platform but because the public has an interest in hearing directly from candidates for political office.”
The ACLU also called it the right move.
“Like it or not, President Trump is one of the country’s leading political figures and the public has a strong interest in hearing his speech. Indeed, some of Trump’s most offensive social media posts ended up being critical evidence in lawsuits filed against him and his administration,” said Anthony D. Romero, executive director of the American Civil Liberties Union. “The biggest social media companies are central actors when it comes to our collective ability to speak — and hear the speech of others — online. They should err on the side of allowing a wide range of political speech, even when it offends.”
Clegg said that in light of his previous violations, Trump now faces heightened penalties for repeat offenses. Such penalties “will apply to other public figures whose accounts are reinstated from suspensions related to civil unrest under our updated protocol.”
If Trump — or anyone else — posts material that doesn’t violate Facebook’s rules but is otherwise harmful and could lead to events such as the Jan. 6 insurrection, Meta says it will not remove it but it may limit its reach. This includes praising the QAnon conspiracy theory or trying to delegitimize an upcoming election.
While Trump has insisted publicly that he has no intention of returning to Twitter, he has been discussing doing so in recent weeks, according to two people familiar with the plans who spoke on condition of anonymity to discuss private conversations.
Though it has been eclipsed culturally by newer rivals like TikTok, Facebook remains the world’s largest social media site and is an incredibly powerful political platform, particularly among older Americans, who are most likely to vote and give money to campaigns.
Throughout his tenure as president, Trump’s use of social media posed a significant challenge to major social media platforms trying to balance the public’s need to hear from their elected leaders with worries about misinformation, harassment and incitement of violence.
“In a healthier information ecosystem, the decisions of a single company would not carry such immense political significance, and we hope that new platforms will emerge to challenge the hegemony of the social media giants,” the ACLU’s Romero said.
Facebook misled parents, failed to guard kids’ privacy, regulators say
U.S. regulators say Facebook misled parents and failed to protect the privacy of children using its Messenger Kids app, including misrepresenting the access it provided to app developers to private user data.
As a result, the Federal Trade Commission on Wednesday proposed sweeping changes to a 2020 privacy order with Facebook — now called Meta — that would prohibit it from profiting from data it collects on users under 18. This would include data collected through its virtual-reality products. The FTC said the company has failed to fully comply with the 2020 order.
Meta would also be subject to other limitations, including with its use of face-recognition technology and be required to provide additional privacy protections for its users.
“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”
Meta called the announcement a “political stunt.”
“Despite three years of continual engagement with the FTC around our agreement, they provided no opportunity to discuss this new, totally unprecedented theory. Let’s be clear about what the FTC is trying to do: usurp the authority of Congress to set industry-wide standards and instead single out one American company while allowing Chinese companies, like TikTok, to operate without constraint on American soil,” Meta said in a prepared statement. “We have spent vast resources building and implementing an industry-leading privacy program under the terms of our FTC agreement. We will vigorously fight this action and expect to prevail.”
Facebook launched Messenger Kids in 2017, pitching it as a way for children to chat with family members and friends approved by their parents. The app doesn’t give kids separate Facebook or Messenger accounts. Rather, it works as an extension of a parent’s account, and parents get controls, such as the ability to decide with whom their kids can chat.
At the time, Facebook said Messenger Kids wouldn’t show ads or collect data for marketing, though it would collect some data it said was necessary to run the service.
But child-development experts raised immediate concerns.
In early 2018, a group of 100 experts, advocates and parenting organizations contested Facebook’s claims that the app was filling a need kids had for a messaging service. The group included nonprofits, psychiatrists, pediatricians, educators and the children’s music singer Raffi Cavoukian.
“Messenger Kids is not responding to a need — it is creating one,” the letter said. “It appeals primarily to children who otherwise would not have their own social media accounts.” Another passage criticized Facebook for “targeting younger children with a new product.”
Facebook, in response to the letter, said at the time that the app “helps parents and children to chat in a safer way,” and emphasized that parents are “always in control” of their kids’ activity.
The FTC now says this has not been the case. The 2020 privacy order, which required Facebook to pay a $5 billion fine, required an independent assessor to evaluate the company’s privacy practices. The FTC said the assessor “identified several gaps and weaknesses in Facebook’s privacy program.”
The FTC also said Facebook, from late 2017 until 2019, “misrepresented that parents could control whom their children communicated with through its Messenger Kids product.”
“Despite the company’s promises that children using Messenger Kids would only be able to communicate with contacts approved by their parents, children in certain circumstances were able to communicate with unapproved contacts in group text chats and group video calls,” the FTC said.
As part of the proposed changes to the FTC’s 2020 order, Meta would also be required to pause launching new products and services without “written confirmation from the assessor that its privacy program is in full compliance” with the order.
Elon Musk threatens to reassign NPR’s Twitter account
WASHINGTON (AP) — Elon Musk threatened to reassign NPR’s Twitter account to “another company,” according to the non-profit news organization, in an ongoing spat between Musk and media groups since his $44 billion acquisition of Twitter last year.
“So is NPR going to start posting on Twitter again, or should we reassign @NPR to another company?” Musk wrote in one email late Tuesday to NPR reporter Bobby Allyn.
NPR stopped tweeting from its main account after Twitter abruptly labeled NPR’s main account as “state-affiliated media” last month, a term that’s also been used to identify outlets controlled or heavily influenced by authoritarian governments. Twitter then changed the label to “government-funded media.”
NPR said that both labels were inaccurate and undermined its credibility — noting the nonprofit news company operates independently of the U.S. government. Federal funding from the Corporation for Public Broadcasting accounts for less than 1% of NPR’s annual operating budget, the company said.
The last tweets on NPR’s main account are from April 12 — when the news organization shared a thread of other places readers and listeners can find its journalism.
Twitter temporarily slapped other news organizations — including the BBC and PBS — with “government-funded media” labels. PBS also stopped using its Twitter account in response.
In an article published late Tuesday, Allyn, the NPR tech reporter, detailed the messages that the billionaire owner of Twitter sent regarding NPR’s account. Musk pointed to NPR’s decision to stop tweeting as the reason he might reassign the account.
“Our policy is to recycle handles that are definitively dormant,” Musk wrote in one email. “Same policy applies to all accounts. No special treatment for NPR.”
According to Twitter’s online policy, the social media platform determines an account’s inactivity based on logging on — not tweeting. Twitter says that users should log in at least every 30 days to keep their accounts active, and that “accounts may be permanently removed due to prolonged inactivity.”
Musk’s comments and his actions, however, do not always match and it is uncertain if he will actually reassign NPR’s handle, regardless of Twitter’s published policy on account activity.
When asked by NPR who would be willing to use NPR’s Twitter account, Musk replied, “National Pumpkin Radio,” along with a fire emoji and a laughing emoji, NPR reported.
It is unknown if NPR has logged into its account, which currently has a blue check without the previous “government-funded media” label, since April. The Associated Press reached out to NPR for comment early Wednesday.
Musk disbanded Twitter’s media and public relations department after the takeover.
As of Wednesday, the NPR Twitter handle still appeared to belong to NPR. If Musk does reassign the account to another user, experts warn of misinformation and further loss of credibility.
“Potentially losing access to a handle as a form of pressure is really just a continuation of eroding the credibility of information sharing on Twitter,” Zeve Sanderson, executive director of New York University’s Center for Social Media and Politics, told The Associated Press.
“For journalism, there’s not only brand safety concerns, but in addition to that, there are a ton of concerns around misinformation potentially being perceived as a lot more credible — because someone (could be) tweeting from the NPR handle when it’s really not them,” Sanderson added.
It is the latest volley in what many experts describe as a chilling and uncertain landscape for journalism on Twitter since Musk acquired the company in October.
In addition to removing news organizations’ verifications and temporarily adding labels like “government-funded media” on some accounts, Musk abruptly suspended the accounts of individual journalists who wrote about Twitter late last year.
In response to Musk’s Tuesday emails, Liz Woolery, digital policy lead at the literary organization PEN America, said that it is “hard to imagine a more potent example of Musk’s willingness to use Twitter to arbitrarily intimidate and retaliate against any person or organization that irks him, with or without provocation.”
“It’s a purely authoritarian tactic, seemingly intended to undermine one of the country’s premier and most trusted news organizations—one that is especially important to rural communities across the U.S.” Woolery added in a Wednesday statement to The Associated Press.
Scientists warn of AI dangers but don’t agree on solutions
CAMBRIDGE, Mass. (AP) — Computer scientists who helped build the foundations of today’s artificial intelligence technology are warning of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.
After retiring from Google so he could speak more freely, so-called Godfather of AI Geoffrey Hinton plans to outline his concerns Wednesday at a conference at the Massachusetts Institute of Technology. He’s already voiced regrets about his work and doubt about humanity’s survival if machines get smarter than people.
Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science prize, told The Associated Press on Wednesday that he’s “pretty much aligned” with Hinton’s concerns brought on by chatbots such as ChatGPT and related technology, but worries that to simply say “We’re doomed” is not going to help.
“The main difference, I would say, is he’s kind of a pessimistic person, and I’m more on the optimistic side,” said Bengio, a professor at the University of Montreal. “I do think that the dangers — the short-term ones, the long-term ones — are very serious and need to be taken seriously by not just a few researchers but governments and the population.”
There are plenty of signs that governments are listening. The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris in what’s being described by officials as a frank discussion on how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.
But all the talk of the most dire future dangers has some worried that hype around superhuman machines — which don’t yet exist — is distracting from attempts to set practical safeguards on current AI products that are largely unregulated.
Margaret Mitchell, a former leader on Google’s AI ethics team, said she’s upset that Hinton didn’t speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.
“It’s a privilege that he gets to jump from the realities of the propagation of discrimination now, the propagation of hate language, the toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalized in tech,” said Mitchell, who was also forced out of Google in the aftermath of Gebru’s departure. “He’s skipping over all of those things to worry about something farther off.”
Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, were all awarded the Turing Prize in 2019 for their breakthroughs in the field of artificial neural networks, instrumental to the development of today’s AI applications such as ChatGPT.
Bengio, the only one of the three who didn’t take a job with a tech giant, has voiced concerns for years about near-term AI risks, including job market destabilization, automated weaponry and the dangers of biased data sets.
But those concerns have grown recently, leading Bengio to join other computer scientists and tech business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI’s latest model, GPT-4.
Bengio said Wednesday he believes the latest AI language models already pass the “Turing test,” the method British codebreaker and AI pioneer Alan Turing introduced in 1950 to measure when AI becomes indistinguishable from a human — at least on the surface.
“That’s a milestone that can have drastic consequences if we’re not careful,” Bengio said. “My main concern is how they can be exploited for nefarious purposes to destabilize democracies, for cyber attacks, disinformation. You can have a conversation with these systems and think that you’re interacting with a human. They’re difficult to spot.”
Where researchers are less likely to agree is on how current AI language systems — which have many limitations, including a tendency to fabricate information — will actually get smarter than humans.
Aidan Gomez was one of the co-authors of the pioneering 2017 paper that introduced a so-called transformer technique — the “T” at the end of ChatGPT — for improving the performance of machine-learning systems, especially in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch at the company’s California headquarters when his team sent out the paper around 3 a.m. when it was due.
“Aidan, this is going to be so huge,” he remembers a colleague telling him, of the work that’s since helped lead to new systems that can generate humanlike prose and imagery.
Six years later and now CEO of his own AI company, Cohere, Gomez is enthused about the potential applications of these systems but bothered by fearmongering he says is “detached from the reality” of their true capabilities and “relies on extraordinary leaps of imagination and reasoning.”
“The notion that these models are somehow gonna get access to our nuclear weapons and launch some sort of extinction-level event is not a productive discourse to have,” Gomez said. “It’s harmful to those real pragmatic policy efforts that are trying to do something good.”