Amazon launches a subscription prescription drug service
Amazon is adding a prescription drug discount program to its growing health care business.
The retail giant said Tuesday that it will launch RxPass, a subscription service for customers who have Prime memberships. Amazon said people will pay $5 a month to fill as many prescriptions as they need from a list of about 50 generic medications, which are generally cheaper versions of brand-name drugs.
The company said the flat fee could cover a list of medications like the antibiotic amoxicillin and the anti-inflammatory drug naproxen.
Sildenafil also made the list. It’s used to treat erectile dysfunction under the brand name Viagra and also treats a form of high blood pressure.
Amazon sells a range of generic drugs through its pharmacy service. Some already cost as little as $1 for a 30-day supply, so the benefit of this new program will vary by customer.
The program doesn’t use insurance, and people with government-funded Medicaid or Medicare coverage are not eligible. It will be available in 42 states and Washington, D.C. at launch.
Any program that gets low-cost generic drugs to more patients “is a good thing,” said Karen Van Nuys, an economist who studies drug pricing at the University of Southern California. But she added that she wasn’t sure how much of an impact RxPass will have.
She noted that the program is limited to Amazon Prime customers. Other options like the Mark Cuban CostPlus Drug Co. sell more generic drugs, many for under $5.
“I just don’t know that it’s expanding access to a new set of patients,” Van Nuys said.
Still, the move could help the company expand its footprint in the health care market, though its efforts there have not always been successful. Last year, the company shuttered its hybrid virtual and in-home care service, Amazon Care, after it failed to gain traction with employers. And Haven, a company Amazon created in collaboration with JPMorgan and Berkshire Hathaway to improve health costs, dissolved the year before that.
Amazon has said its online drug store Amazon Pharmacy is a key part of its health care plan, along with primary care organization One Medical, which the online giant is seeking to acquire for $3.9 billion. The Federal Trade Commission is investigating the proposed buyout.
In November, the company also said it would begin offering “Amazon Clinic,” a messaging service that connects patients with doctors for about two dozen common conditions, such as allergies and hair loss.
Facebook misled parents, failed to guard kids’ privacy, regulators say
U.S. regulators say Facebook misled parents and failed to protect the privacy of children using its Messenger Kids app, including misrepresenting the access it provided to app developers to private user data.
As a result, the Federal Trade Commission on Wednesday proposed sweeping changes to a 2020 privacy order with Facebook — now called Meta — that would prohibit it from profiting from data it collects on users under 18. This would include data collected through its virtual-reality products. The FTC said the company has failed to fully comply with the 2020 order.
Meta would also be subject to other limitations, including restrictions on its use of face-recognition technology, and would be required to provide additional privacy protections for its users.
“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”
Meta called the announcement a “political stunt.”
“Despite three years of continual engagement with the FTC around our agreement, they provided no opportunity to discuss this new, totally unprecedented theory. Let’s be clear about what the FTC is trying to do: usurp the authority of Congress to set industry-wide standards and instead single out one American company while allowing Chinese companies, like TikTok, to operate without constraint on American soil,” Meta said in a prepared statement. “We have spent vast resources building and implementing an industry-leading privacy program under the terms of our FTC agreement. We will vigorously fight this action and expect to prevail.”
Facebook launched Messenger Kids in 2017, pitching it as a way for children to chat with family members and friends approved by their parents. The app doesn’t give kids separate Facebook or Messenger accounts. Rather, it works as an extension of a parent’s account, and parents get controls, such as the ability to decide with whom their kids can chat.
At the time, Facebook said Messenger Kids wouldn’t show ads or collect data for marketing, though it would collect some data it said was necessary to run the service.
But child-development experts raised immediate concerns.
In early 2018, a group of 100 experts, advocates and parenting organizations contested Facebook’s claims that the app was filling a need kids had for a messaging service. The group included nonprofits, psychiatrists, pediatricians, educators and the children’s music singer Raffi Cavoukian.
“Messenger Kids is not responding to a need — it is creating one,” the letter said. “It appeals primarily to children who otherwise would not have their own social media accounts.” Another passage criticized Facebook for “targeting younger children with a new product.”
Facebook, in response to the letter, said at the time that the app “helps parents and children to chat in a safer way,” and emphasized that parents are “always in control” of their kids’ activity.
The FTC now says this has not been the case. The 2020 privacy order, which required Facebook to pay a $5 billion fine, required an independent assessor to evaluate the company’s privacy practices. The FTC said the assessor “identified several gaps and weaknesses in Facebook’s privacy program.”
The FTC also said Facebook, from late 2017 until 2019, “misrepresented that parents could control whom their children communicated with through its Messenger Kids product.”
“Despite the company’s promises that children using Messenger Kids would only be able to communicate with contacts approved by their parents, children in certain circumstances were able to communicate with unapproved contacts in group text chats and group video calls,” the FTC said.
As part of the proposed changes to the FTC’s 2020 order, Meta would also be required to pause launching new products and services without “written confirmation from the assessor that its privacy program is in full compliance” with the order.
Elon Musk threatens to reassign NPR’s Twitter account
WASHINGTON (AP) — Elon Musk threatened to reassign NPR’s Twitter account to “another company,” according to the nonprofit news organization, in an ongoing spat between Musk and media groups since his $44 billion acquisition of Twitter last year.
“So is NPR going to start posting on Twitter again, or should we reassign @NPR to another company?” Musk wrote in one email late Tuesday to NPR reporter Bobby Allyn.
NPR stopped tweeting from its main account after Twitter abruptly labeled it “state-affiliated media” last month, a term that’s also been used to identify outlets controlled or heavily influenced by authoritarian governments. Twitter then changed the label to “government-funded media.”
NPR said that both labels were inaccurate and undermined its credibility — noting the nonprofit news company operates independently of the U.S. government. Federal funding from the Corporation for Public Broadcasting accounts for less than 1% of NPR’s annual operating budget, the company said.
The last tweets on NPR’s main account are from April 12 — when the news organization shared a thread of other places readers and listeners can find its journalism.
Twitter temporarily slapped other news organizations — including the BBC and PBS — with “government-funded media” labels. PBS also stopped using its Twitter account in response.
In an article published late Tuesday, Allyn, the NPR tech reporter, detailed the messages that the billionaire owner of Twitter sent regarding NPR’s account. Musk pointed to NPR’s choice to stop tweeting as reasoning behind possibly reassigning the account.
“Our policy is to recycle handles that are definitively dormant,” Musk wrote in one email. “Same policy applies to all accounts. No special treatment for NPR.”
According to Twitter’s online policy, the social media platform determines an account’s inactivity based on logging on — not tweeting. Twitter says that users should log in at least every 30 days to keep their accounts active, and that “accounts may be permanently removed due to prolonged inactivity.”
Musk’s comments and his actions, however, do not always match, and it is uncertain whether he will actually reassign NPR’s handle, regardless of Twitter’s published policy on account activity.
When asked by NPR who would be willing to use NPR’s Twitter account, Musk replied, “National Pumpkin Radio,” along with a fire emoji and a laughing emoji, NPR reported.
It is unknown if NPR has logged into its account, which currently has a blue check without the previous “government-funded media” label, since April. The Associated Press reached out to NPR for comment early Wednesday.
Musk disbanded Twitter’s media and public relations department after the takeover.
As of Wednesday, the NPR Twitter handle still appeared to belong to NPR. If Musk does reassign the account to another user, experts warn of misinformation and further loss of credibility.
“Potentially losing access to a handle as a form of pressure is really just a continuation of eroding the credibility of information sharing on Twitter,” Zeve Sanderson, executive director of New York University’s Center for Social Media and Politics, told The Associated Press.
“For journalism, there’s not only brand safety concerns, but in addition to that, there are a ton of concerns around misinformation potentially being perceived as a lot more credible — because someone (could be) tweeting from the NPR handle when it’s really not them,” Sanderson added.
It is the latest volley in what many experts describe as a chilling and uncertain landscape for journalism on Twitter since Musk acquired the company in October.
In addition to removing news organizations’ verifications and temporarily adding labels like “government-funded media” on some accounts, Musk abruptly suspended the accounts of individual journalists who wrote about Twitter late last year.
In response to Musk’s Tuesday emails, Liz Woolery, digital policy lead at literary organization PEN America, said that it is “hard to imagine a more potent example of Musk’s willingness to use Twitter to arbitrarily intimidate and retaliate against any person or organization that irks him, with or without provocation.”
“It’s a purely authoritarian tactic, seemingly intended to undermine one of the country’s premier and most trusted news organizations — one that is especially important to rural communities across the U.S.,” Woolery added in a Wednesday statement to The Associated Press.
Scientists warn of AI dangers but don’t agree on solutions
CAMBRIDGE, Mass. (AP) — Computer scientists who helped build the foundations of today’s artificial intelligence technology are warning of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.
After retiring from Google so he could speak more freely, so-called Godfather of AI Geoffrey Hinton plans to outline his concerns Wednesday at a conference at the Massachusetts Institute of Technology. He’s already voiced regrets about his work and doubt about humanity’s survival if machines get smarter than people.
Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the top computer science prize, told The Associated Press on Wednesday that he’s “pretty much aligned” with Hinton’s concerns brought on by chatbots such as ChatGPT and related technology, but worries that to simply say “We’re doomed” is not going to help.
“The main difference, I would say, is he’s kind of a pessimistic person, and I’m more on the optimistic side,” said Bengio, a professor at the University of Montreal. “I do think that the dangers — the short-term ones, the long-term ones — are very serious and need to be taken seriously by not just a few researchers but governments and the population.”
There are plenty of signs that governments are listening. The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris in what’s being described by officials as a frank discussion on how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.
But all the talk of the most dire future dangers has some worried that hype around superhuman machines — which don’t yet exist — is distracting from attempts to set practical safeguards on current AI products that are largely unregulated.
Margaret Mitchell, a former leader on Google’s AI ethics team, said she’s upset that Hinton didn’t speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.
“It’s a privilege that he gets to jump from the realities of the propagation of discrimination now, the propagation of hate language, the toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalized in tech,” said Mitchell, who was also forced out of Google in the aftermath of Gebru’s departure. “He’s skipping over all of those things to worry about something farther off.”
Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, were all awarded the Turing Prize in 2019 for their breakthroughs in the field of artificial neural networks, instrumental to the development of today’s AI applications such as ChatGPT.
Bengio, the only one of the three who didn’t take a job with a tech giant, has voiced concerns for years about near-term AI risks, including job market destabilization, automated weaponry and the dangers of biased data sets.
But those concerns have grown recently, leading Bengio to join other computer scientists and tech business leaders like Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI’s latest model, GPT-4.
Bengio said Wednesday he believes the latest AI language models already pass the “Turing test,” named after British codebreaker and AI pioneer Alan Turing, who in 1950 introduced a method to measure when AI becomes indistinguishable from a human — at least on the surface.
“That’s a milestone that can have drastic consequences if we’re not careful,” Bengio said. “My main concern is how they can be exploited for nefarious purposes to destabilize democracies, for cyber attacks, disinformation. You can have a conversation with these systems and think that you’re interacting with a human. They’re difficult to spot.”
Where researchers are less likely to agree is on how current AI language systems — which have many limitations, including a tendency to fabricate information — will actually get smarter than humans.
Aidan Gomez was one of the co-authors of the pioneering 2017 paper that introduced a so-called transformer technique — the “T” at the end of ChatGPT — for improving the performance of machine-learning systems, especially in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch at the company’s California headquarters when his team sent out the paper around 3 a.m. when it was due.
“Aidan, this is going to be so huge,” he remembers a colleague telling him, of the work that’s since helped lead to new systems that can generate humanlike prose and imagery.
Six years later and now CEO of his own AI company, Cohere, Gomez is enthused about the potential applications of these systems but bothered by fearmongering he says is “detached from the reality” of their true capabilities and “relies on extraordinary leaps of imagination and reasoning.”
“The notion that these models are somehow gonna get access to our nuclear weapons and launch some sort of extinction-level event is not a productive discourse to have,” Gomez said. “It’s harmful to those real pragmatic policy efforts that are trying to do something good.”