Fewer Americans are signing up for Affordable Care Act health insurance plans this year, new federal data shows, as expiring subsidies and other factors push health expenses too high for many to manage. Nationally, around 800,000 fewer people have selected plans compared to a similar time last year, marking a 3.5% drop in total enrollment so far. That includes a decrease in both new consumers signing up for ACA plans and existing enrollees re-upping them.

The new data released Monday evening by the Centers for Medicare and Medicaid Services is only a snapshot of a continuously changing pool of enrollees. It includes sign-ups through Jan. 3 in states that use Healthcare.gov for ACA plans and through Dec. 27 for states that have their own ACA marketplaces. In most states, the period for shopping for plans continues through Jan. 15 for plans that start in February.

But even though it's early, the data builds on fears that expiring enhanced tax credits could cause a dip in enrollment and force many Americans to make tough decisions: delay buying health insurance, look for alternatives, or forgo it entirely. Experts warn that the number of people who have signed up for plans may drop even further as enrollees get their first bill in January and some choose to cancel.

Health care costs at the center of a fight in Congress

The declining enrollment comes as Congress has been locked in a partisan battle over what to do about the subsidies that expired at the start of the new year. For months, Democrats have fought for a straight extension of the tax credits, while Republicans have insisted larger reforms are a better way to root out fraud and abuse and keep costs down overall. Last week, in a remarkable rebuke of Republican leadership, the House passed legislation to extend the subsidies for three years. The bill now sits in the Senate, where pressure is building for a bipartisan compromise.

Up until this year, President Barack Obama's landmark health insurance program had been an increasingly popular option for Americans who don't get health coverage through their jobs, including small business owners, gig workers, farmers, ranchers, and others. For the 2021 plan year, about 12 million people selected an Affordable Care Act plan. Enhanced tax credits were introduced the following year, and four years later, enrollment had doubled to over 24 million. This year's sinking sign-ups, sitting at about 22.8 million so far, mark the first time in the past four years that enrollment has been down from the previous year at this point in the shopping window.

The loss of enhanced subsidies means annual premium costs will more than double for the average ACA enrollee who had them, according to the health care research nonprofit KFF. But extending the subsidies would also be expensive for the country. Ahead of last week's House vote, the nonpartisan Congressional Budget Office estimated that extending the subsidies for three years would increase the nation's deficit by about $80.6 billion over the decade.

Americans begin looking for other options

Robert Kaestner, a health economist at the University of Chicago, said some of those who abandon ACA plans may have other options, such as going on a partner's employer health plan or changing their income to qualify for Medicaid. Others will go without insurance at least temporarily while they look for alternatives. "My prediction is 2 million more people will lack health insurance for a while," Kaestner said.
"That's a serious issue, but Republicans would argue we're using government money more efficiently, we're targeting people who really need it and we're saving $35 billion a year."

Several Americans interviewed by The Associated Press have said they're dropping coverage altogether for 2026 and will pay out of pocket for needed appointments. Many said they are crossing their fingers that they aren't affected by a costly injury or diagnosis.

"I'm pretty much going to be going without health insurance unless they do something," said 52-year-old Felicia Persaud, a Florida entrepreneur who dropped coverage when she saw her ACA costs were set to increase by about $200 per month. "It's sort of like playing poker and hoping the chips fall and try the best that you can."

Ali Swenson and Nicky Forster, Associated Press
As concerns grow over Grok's ability to generate sexually explicit content without the subject's consent, a number of countries are blocking access to Elon Musk's artificial intelligence chatbot.

At the center of the controversy is a feature called Grok Imagine, which lets users create AI-generated images and videos. That tool also features a "spicy mode," which lets users generate adult content.

Both Indonesia and Malaysia ordered that restrictions be put in place over the weekend. Malaysian officials blocked access to Grok on Sunday, citing its repeated misuse to generate obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images. Officials also cited "repeated failures by X Corp." to prevent such content. Indonesia had blocked the chatbot the previous day for similar reasons. In a statement accompanying Grok's suspension, Meutya Hafid, Indonesia's Minister of Communication and Digital, said: "The government views the practice of non-consensual sexual deepfakes as a serious violation of human rights, dignity, and the security of citizens in the digital space."

The responses could be just the beginning of Grok's problems, though. Several other countries, including the U.K., India, and France, are considering following suit. The U.K. has launched an investigation into the chatbot's explicit content, which could result in it being blocked in that country as well. "Reports of Grok being used to create and share illegal, non-consensual, intimate images and child sexual abuse material on X have been deeply concerning," Ofcom, the country's communications regulator, said in a statement. Musk, in a social media post following word of the Ofcom investigation, wrote that the U.K. government "just want[s] to suppress free speech."

Fast Company attempted to contact xAI for comment about the actions in Indonesia and Malaysia, as well as similar possible blocks in other countries. An automatic reply from the company read "Legacy Media Lies."

Beyond the U.K., officials in the European Union, Brazil, and India have called for probes into Grok's deepfakes, which could ultimately result in bans as well. (The U.S. government, which has contracts with xAI, has been fairly silent on the matter so far.) In a press conference last week, European Commission spokesperson Thomas Regnier said the commission was "very seriously looking into this matter," adding, "This is not 'spicy.' This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe."

Musk and X are still feeling the effects of a $130 million fine the EU slapped on the company last month for violating the Digital Services Act, specifically over deceptive paid verification and a lack of transparency in the company's advertising repository.

Beyond sexualized images of adults, a report from the nonprofit group AI Forensics that analyzed 20,000 Grok-generated images created between Dec. 25 and Jan. 1 found that 2% depicted a person who appeared to be 18 or younger. These included 30 images of young or very young women or girls in bikinis or transparent clothes. The analysis also found Nazi and ISIS propaganda material generated by Grok.

While the company has not addressed the countries blocking access to its services, it did comment on the use of its tool to create sexual content featuring minors.
"We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary," X Safety wrote in a post. "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."

The company has also announced it will limit image generation and editing features to paying subscribers. That, however, likely won't be enough to satisfy government officials who want to block access to Grok while these images can still be generated.
Advancements in artificial intelligence are shaping nearly every facet of society, including education. Over the past few years, especially with the availability of large language models like ChatGPT, there's been an explosion of AI-powered edtech. Some of these tools are truly helping students, while many are not. For educational leaders seeking to leverage the best of AI while mitigating its harms, it's a lot to navigate.

That's why the organization I lead, the Advanced Education Research and Development Fund, collaborated with the Alliance for Learning Innovation (ALI) and Education First to write Proof Before Hype: Using R&D for Coherent AI in K-12 Education. I sat down with my coauthors, Melissa Moritz, an ALI senior advisor, and Ila Deshmukh Towery, an Education First partner, to discuss how schools can adopt innovative, responsible, and effective AI tools.

Q: Melissa, what concerns you about the current wave of AI edtech tools, and what would you change to ensure these tools benefit students?

Melissa: Too often, AI-powered edtech is developed without grounding in research or educators' input. This leads to tools that may seem innovative but solve the wrong problems, lack evidence of effectiveness, ignore workflow realities, or exacerbate inequities. What we need is a fundamental shift in education research and development so that educators are included in defining problems and developing classroom solutions from the start.

Deep collaboration across educators, researchers, and product developers is critical. Let's create infrastructure and incentives that make it easier for them to work together toward shared goals. AI tool development must also prioritize learning science and evidence. Practitioners, researchers, and developers must continuously learn and iterate to give students the most effective tools for their needs and contexts.

Q: Ila, what is the AI x Coherence Academy, and what did Education First learn about AI adoption from the K-12 leaders who participated in it?

Ila: The AI x Coherence Academy helps cross-functional school district teams do the work that makes AI useful: define the problem, align with instructional goals, and then choose (or adapt) tools that fit system priorities. It's a multi-district initiative that helps school systems integrate AI in ways that strengthen, rather than disrupt, core instructional priorities, so that adoption isn't a series of disconnected pilots.

We're learning three things through this work. First, coherence beats novelty. Districts prefer customizable AI solutions that integrate with their existing tech infrastructure rather than one-off products. Second, use cases come before tools. A clear use case that articulates a problem and names and tracks outcomes quickly filters out the noise. Third, trust is a prerequisite. In a world increasingly skeptical of tech in schools, buy-in is more likely when educators, students, and community members help define the problem and shape how the technology helps solve it.

Leaders are telling us they want tools that reinforce the teaching and learning goals already underway, have clear use cases, and offer feedback loops for continuous improvement.

Q: Melissa and Ila, what types of guardrails need to be in place for the responsible and effective integration of AI in classrooms?

Ila: For AI to be a force for good in education, we need several guardrails. Let's start with coherence and equity.
For coherence, AI adoption must explicitly align with systemwide teaching and learning goals, data systems, and workflows. To minimize bias and accessibility issues, product developers should publish bias and accessibility checks, and school systems should track relevant data, such as whether tools support (versus disrupt) learning and development, and the tools' efficacy and impact on academic achievement.

These guardrails need to be co-designed with educators and families, not imposed by technologists or policymakers. The districts making real progress through our AI x Coherence Academy are not AI maximalists. They are disciplined about how new tools connect to educational goals, in partnership with the people they hope will use them. In a low-trust environment, co-designed guardrails and definitions are the ones that will actually hold.

Melissa: We also need guardrails around safety, privacy, and evidence. School systems should promote safety and protect student data by giving families information about the AI tools being used and giving them clear opt-out paths. As for product developers, building on Ila's points, they need to be transparent about how their products leverage AI. Developers also have a responsibility to provide clear guidance around how their product should and shouldn't be used, as well as to disclose evidence of the tool's efficacy. And of course, state and district leaders and regulators should hold edtech providers accountable.

Q: Melissa and Ila, what gives you hope as we enter this rapidly changing AI age?

Melissa: Increasingly, we are starting to have the right conversations about AI and education. More leaders and funders are calling for evidence, and for a paradigm shift in how we think about teaching and learning in the AI age. Through my work at ALI, I'm hearing from federal policymakers, as well as state and district leaders, that there is a genuine desire for evidence-based AI tools that meet students' and teachers' needs. I'm hopeful that together, we'll navigate this new landscape with a focus on AI innovations that are both responsible and effective.

Ila: What gives me hope is that district leaders are getting smarter about AI adoption. They're recognizing that adding more tools isn't the answer; coherence is. The districts making real progress aren't the ones with the most AI pilots; they're the ones who are disciplined about how new tools connect to their existing goals, systems, and relationships. They're asking: Does this reinforce what we're already trying to do well, or does it pull us in a new direction? And they're bringing a range of voices into defining use cases and testing solutions to center, rather than erode, trust. That kind of strategic clarity is what we need right now. When AI adoption is coherent rather than chaotic, it can strengthen teaching and learning rather than fragment it.

Auditi Chakravarty is CEO of the Advanced Education Research and Development Fund.