Cory Joseph has been blind since birth. So he's among the people Apple aims to serve with an addition to its App Store called Accessibility Nutrition Labels, one of a raft of features the company announced earlier this week to mark Global Accessibility Awareness Day. Once the labels go live later this year, each app's listing will detail the accessibility features it supports, such as the VoiceOver screen reader, voice input, options to adjust text size and screen contrast, and captioned audio. These enabling technologies can be the difference between an app being essential and unusable: "Having this level of transparency from the App Store is huge," says Joseph.

He isn't just one of the users who will benefit from that information, though. As a principal accessibility solutions architect at CVS Health, Joseph is in the business of making sure software works for everybody. Given his employer's scale (it's the world's second-largest healthcare company by revenue and reaches 100 million people a day), it's a job with the potential for deep real-world impact. When using CVS's apps, "everyone's trying to find the best care, and we want to make sure that's barrier-free for everyone," explains Joseph.

The 6-year-old team he's on has been responsible for achievements that go well beyond taking advantage of the core accessibility features offered by Apple and other platform providers.

[Spoken Rx uses RFID technology to identify prescription meds and read the vital details about them out loud. Photo: Courtesy of CVS]

In 2020, for instance, the company introduced a CVS Pharmacy app feature called Spoken Rx, "a baby of mine," Joseph says. Special radio-frequency identification (RFID) labels on prescription containers enable it to read aloud vital information such as dosage instructions.
CVS Health has also made some of its investments in accessibility freely available to other developers by open-sourcing them, including iOS and Android code, an automated system for testing website usability, and tools for annotating web designs in Figma.

As a field, accessibility has come a long way since Apple first dedicated a team to it, initially known as the Office of Special Education. Over 40 years, the company has built a wealth of functionality into its products to facilitate their use by people with disabilities, including the technologies that make the iPhone useful even if you can't see its touchscreen interface. Some of its recent advances, such as on-device generation of custom synthetic voices, would have been unimaginable just a few years ago. This week's announcements even include support for brain-computer interfaces.

By contrast, there's nothing gee-whizzy about the Accessibility Nutrition Labels themselves. They just summarize the features that a given app has enabled. But by doing that in such a straightforward, prominent way, they'll not only aid millions of users but also give some glory to the software makers who take accessible design seriously. Rather than be embarrassed by listings that make their lack of effort obvious, developers who don't yet have much to brag about might finally get with the program.

[Accessibility Nutrition Labels will clearly indicate which features for inclusive design an app supports. Photo: Apple]

Joseph hopes that the labels' associations are only positive. "It's easy to think about this sort of thing as a badge of shame, and I think that's not the right way to think about this," he told me. "This is an opportunity for independent developers, large organizations, and everyone in between to highlight the good work they do."
Even though Joseph works for a company that has dedicated significant mindshare and money to that good work, he's up front about the obstacles to rapid progress that large companies face, even when they have all the right intentions. "I would be lying if I said that there aren't challenges," he told me. "We're a gigantic organization. There are challenges in every gigantic organization. Of course, we balance all of our work and plan everything out as best as we can, and we deliver the most successful experience that we can across our applications."

The good news, he adds, is that CVS Health-size resources aren't necessary to make software accessible. "Realistically, it's easier for smaller developers," he says. "They can move more quickly, they can update their code faster, and they can adapt to and take in their user feedback in real time and make those changes by engaging directly. For independent and smaller developers, this shouldn't be a burden."

I find that take heartening. And if Joseph is right that app creators don't have to be humongous to get inclusive design right, Accessibility Nutrition Labels will soon prove it.

You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you, or if you're reading it on FastCompany.com, you can check out previous issues and sign up to get it yourself every Friday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters. I'm also on Bluesky, Mastodon, and Threads, and you can follow Plugged In on Flipboard.
It has been an odd few weeks for generative AI systems, with ChatGPT suddenly turning sycophantic, and Grok, xAI's chatbot, becoming obsessed with South Africa. Fast Company spoke to Steven Adler, a former research scientist for OpenAI who until November 2024 led safety-related research and programs for first-time product launches and more speculative long-term AI systems, about both incidents, and what he thinks might have gone wrong. The interview has been edited for length and clarity.

What do you make of these two recent incidents of AI models going haywire: ChatGPT's sudden sycophancy and Grok's South Africa obsession?

The high-level thing I make of it is that AI companies are still really struggling with getting AI systems to behave how they want, and that there is a wide gap between the ways that people try to go about this today, whether it's to give a really precise instruction in the system prompt or feed a model training data or fine-tuning data that you think surely demonstrate the behavior you want there, and reliably getting models to do the things you want and to not do the things you want to avoid.

Can they ever get to that point of certainty?

I'm not sure. There are some methods that I feel optimistic about, if companies took their time and were not under pressure to really speed through testing. One idea is this paradigm called control, as opposed to alignment. So the idea being, even if your AI wants different things than you want, or has different goals than you want, maybe you can recognize that somehow and just stop it from taking certain actions or saying or doing certain things. But that paradigm is not widely adopted at the moment, and so at the moment, I'm pretty pessimistic.

What's stopping it being adopted?

Companies are competing on a bunch of dimensions, including user experience, and people want responses faster. There's the gratifying thing of seeing the AI start to compose its response right away.
There's some real user cost of safety mitigations that go against that. Another aspect is, I've written a piece about why it's so important for AI companies to be really careful about the ways that their leading AI systems are used within the company. If you have engineers using the latest GPT model to write code to improve the company's security, if a model turns out to be misaligned and wants to break out of the company or do some other thing that undermines security, it now has pretty direct access. So part of the issue today is AI companies, even though they're using AI in all these sensitive ways, haven't invested in actually monitoring and understanding how their own employees are using these AI systems, because it adds more friction to their researchers being able to use them for other productive uses.

I guess we've seen a lower-stakes version of that with Anthropic [where a data scientist working for the company used AI to support their evidence in a court case, which included a hallucinatory reference to an academic article].

I obviously don't know the specifics. It's surprising to me that an AI expert would submit testimony or evidence that included hallucinated court cases without having checked it. It isn't surprising to me that an AI system would hallucinate things like that. These problems are definitely far from solved, which I think points to a reason that it's important to check them very carefully.

You wrote a multi-thousand-word piece on ChatGPT's sycophancy and what happened. What did happen?

I would separate what went wrong initially versus what I found in terms of what still is going wrong. Initially, it seems that OpenAI started using new signals for what direction to push its AI into. Broadly, when users had given the chatbot a thumbs-up, they used this data to make the chatbot behave more in that direction, and it was penalized for a thumbs-down. And it happens to be that some people really like flattery. In small doses, that's fine enough.
But in aggregate this produced an initial chatbot that was really inclined to blow smoke. The issue with how it became deployed is that OpenAI's governance around what passes, what evaluations it runs, is not good enough. And in this case, even though they had a goal for their models to not be sycophantic (this is written in the company's foremost documentation about how their models should behave), they did not actually have any tests for this.

What I then found is that even this version that is fixed still behaves in all sorts of weird, unexpected ways. Sometimes it still has these behavioral issues. This is what's been called sycophancy. Other times it's now extremely contrarian. It's gone the other way. What I make of this is it's really hard to predict what an AI system is going to do. And so for me, the lesson is how important it is to do careful, thorough empirical testing.

And what about the Grok incident?

The type of thing I would want to understand to assess that is what sources of user feedback Grok collects, and how, if at all, those are used as part of the training process. And in particular, in the case of the South African white-genocide-type statements, are these being put forth by users and the model is agreeing with them? Or to what extent is the model blurting them out on its own, without having been touched?

It seems these small changes can escalate and amplify.

I think the problems today are real and important. I do think they are going to get even harder as AI starts to get used in more and more important domains. So, you know, it's troubling. If you read the accounts of people having their delusions reinforced by this version of ChatGPT, those are real people. This can be actually quite harmful for them. And ChatGPT is widely used by a lot of people.
When Nicholas Bloom, the William Eberle Professor of Economics at Stanford University in California, started studying working from home in 2004, "it was hard to get anyone engaged," he says. "Even in 2018, no one had any interest whatsoever."

In 2025, that's hard to fathom. Between the pandemic and technological advancements, WFH has become a norm among white-collar workers. Not only has it normalized; it's also destigmatized. The act that used to generate memes of Homer Simpson on the couch, prodding a distant computer with a stick, has gained positive connotations, says Bloom. Working from home is seen as a privilege.

It's also here to stay. For their latest study, Working from Home in 2025, Bloom and his collaborators analyzed responses from 16,000 college graduates across 40 countries and discovered that WFH levels appear to have stabilized as of 2025, but its embrace hasn't been universal. WFH rates vary by location: they are highest in English-speaking regions (the U.S., UK, Australia, Canada, and New Zealand), dip a little across continental Europe, then dip a lot across Africa and Central and South America. WFH is least prevalent in Asia.

To be clear, when Bloom says WFH, he's mostly talking about those on hybrid work schedules. "Sixty percent of people work fully in-person, 30% are hybrid, and 10% are fully remote," he says of those countries where the policy has stuck. Hybrid typically means Tuesday through Thursday in the office, a schedule Bloom values at about 8% more pay, because it "saves two to three hours a week of commuting [and] enables people to live further away from their offices," often to where real estate is cheaper. Companies also benefit from hybrid policies, Bloom's study found, since fewer employees tend to quit. With all these advantages, you'd think bosses would have embraced WFH worldwide. "Why on earth does, say, Japan have a third the work-from-home rates of the U.S.?" Bloom says.
After looking at factors including development (Japan is about as developed as the U.S.), population density, industrial structure, and connectivity (no big differences there), Bloom and his fellow researchers were left with one notable variable. "The big factor is cultural," he says, "and it's around individualism."

In conversation with Fast Company, which has been edited for length and clarity, Bloom elaborated on how individualism drives working from home, how much the pandemic really increased at-home work rates, and why people still tend to think we're returning to the office even though the data says otherwise.

Fast Company: What inspired you to look globally for your latest study?

Nicholas Bloom: If you look at the data, there was clearly a return-to-office movement from summer 2020 onwards, after the lockdown in the U.S. But from spring 2023 onwards, the return to office seems to slow down. People seem surprised by that. They're like, isn't the media full of stories of Zoom canceling [WFH], Amazon canceling [WFH]? Yes, there are a bunch of high-profile firms canceling or reducing work from home. Turns out there are just as many on the other side, because their leases expire. If you're Goliath National Bank and your lease expires, it's a perfect opportunity to reduce days in the office and save a chunk of money. What we've seen over the last couple of years in the U.S. is like a war, and it's been fought to a standstill.

That sparked the big question for us: What on earth does this look like globally? We last collected global data in 2023, so I really didn't know. It turns out, globally, work from home has also stalled out. There has been no change since 2023. Globally, we're in a new norm. Folks saying "when we return to the office" at this point are dreaming. This is the future.

One of your findings I found particularly interesting was that WFH rates are higher in individualistic societies than in collectivist ones. Can you unpack that?
In individualistic societies, managers typically aren't micromanaging their employees. The U.S. setup is: A manager tells an employee what to do and gives them strong incentives, like performance evaluations and bonuses. In Japan, there's much more micromanaging, because there's much less hiring, firing, and bonuses. Managers want to see employees there. In Japan, you can't leave the office until the boss has left. This long-hours culture exists for everyone. When the boss leaves, their junior leaves, then their junior leaves, etcetera. That is very problematic for work from home. If you talk to folks working for American firms in Japan, they're typically on a hybrid setup. If they work for Japanese firms in Japan, often doing the same job, they're required to come into the office every day. Culture seems to explain much of this difference across countries.

To what extent do you think this comes down to bosses trusting their workers, or not?

It is kind of trust, although in the U.S., it's trust but verify. Bosses don't just trust workers; they trust them, but then they monitor.

Should companies without a WFH policy reconsider?

The big selling point is that it's profitable. In my paper in Nature in June 2024, we did a massive randomized control trial at a big company called Trip.com. They're a publicly listed company worth about $40 billion. They randomized whether you got to work from home two days a week or come in all five days: the former if your birthday fell on an odd day, the latter if it fell on an even day. For 24 months, we tracked 1,600 employees working in finance, marketing, and computer engineering, professionals with college degrees. There was no effect on performance. However, quit rates fell by 35% for people allowed to work from home two days a week. For Trip.com, every person that quits costs about $50,000. If someone quits, you have to advertise, re-interview, re-recruit, get them up to speed, and take managers off activity to train them.
By reducing quit rates by 35% with no effect on productivity, that's increasing business profits by like $20 million a year. That is ultimately why work from home has stuck.

On the flip side, an Economist article that mentions your study cited JPMorgan CEO Jamie Dimon's worry that the young generation is being damaged by increased working from home. To what extent do you agree or disagree with that statement, and why?

I advise my Stanford undergrads, particularly in their first five years of work, that it's a good idea to go into the office four days a week, because Jamie Dimon is exactly right. It is easier to mentor, learn, and build connections in person. Typically, when I poll students, that's what they want: they want to socialize, be mentored, and they don't have a lot of space at home. As people get into their 30s and 40s, they've moved up that learning curve, but they still benefit from coming in, maybe three or even two days a week.

Another interesting data point from your study was the similar WFH rates for men and women across regions. What do you think accounts for that?

They want to. You see a slightly higher preference for women to work from home. The main decider in the U.S. is: Do you have kids? A man with children under the age of 12 has a higher preference to work from home than a woman without kids, for example. Having a disability is also a huge driver, but gender doesn't matter that much. What you see in countries like India is that gender matters a lot more, because for women, there's assault risk and massive sexism in the workplace. In lower-income countries, the gender gap grows.

What was the most surprising takeaway from your study?

Working from home has stabilized globally. I did an online presentation for Australia last week, and people there are under the same view as in the U.S., that big companies were banning it. We just don't see that in any data set.
Fact and opinion are about as divergent as people's views on crime: people always think crime is rising, when on average it's tending to decline. Everyone thinks work from home is ending, but you don't see it globally.