While executives debate AI strategy in boardrooms, the real disruption is already happening on the frontlines. From automated scheduling to AI-assisted diagnostics to customer service chatbots, frontline workers are increasingly interacting with intelligent systems. Yet too many organizations still treat AI as a corporate workplace issue, overlooking the people who are most exposed to its impact. That's a mistake. If companies want to ensure their operations stay competitive, they need to remain committed to investing in the people who are closest to the work. The frontline is the proving ground: if your AI strategy fails there, it fails everywhere.

According to a recent IBM report, 40% of workers will need to reskill in the next three years due to AI and automation. Yet many companies still deprioritize frontline education. That's not just shortsighted, it's expensive. Turnover, disengagement, and operational inefficiencies all spike when workers aren't equipped to adapt.

Some companies are getting it right. Carter's, CVS Health, McDonald's, and Papa Johns have all invested in education benefits that make learning accessible to hourly and frontline employees. These programs not only offer tuition assistance but provide career pathways, coaching, and short-form credentials that align with real business needs.

McDonald's is proving that frontline education isn't merely a perk but a strategic imperative. Through its Archways to Opportunity program, McDonald's and its participating franchisees offer restaurant employees access to high school completion, college tuition assistance, English language courses, and career coaching. The results are clear: over 90,000 crew members have participated, with more than $240 million invested in tuition assistance.
According to a recent survey of Archways participants, 75% say the program helped them pursue a career in a new field or industry, 79% report learning job and life skills they still use today, and 88% gained greater confidence in their abilities. Additionally, nearly two-thirds say Archways helped them earn more or get a raise, and 55% say it helped them get promoted faster. As AI reshapes frontline roles, McDonald's is leaning into the human skills that matter most (communication, teamwork, resilience) and equipping its workforce to thrive in a tech-enabled future.

If you're a CHRO or CEO wondering where to begin, here are three immediate actions that can drive impact:

Stop Gatekeeping Education: Too often, learning opportunities are reserved for salaried or corporate employees, leaving out the very people who keep operations running: frontline, hourly, and part-time workers. Making education accessible means removing upfront costs, offering flexible formats that fit around shift work, and ensuring that programs deliver a clear return on investment for the learner. When companies like Carter's expanded access to education benefits, they didn't just improve participation; they built stronger pipelines for internal mobility and retention.

Start Laying the Groundwork for AI Readiness: If your organization is investing in automation, it must also invest in workforce readiness and the long-term success of the people who will be affected by it. That doesn't always mean launching AI-specific training on day one, but it does mean creating pathways for frontline employees to develop core technology skills and competencies, and gain future-ready credentials. CVS Health, for example, offers no-cost access to over 80 degree and credential programs through its tuition assistance program, which includes access to AI-specific trainings. The infrastructure is in place for employees to pursue relevant skills as business needs evolve.
The key is to ensure HR, L&D, and IT are aligned so that when AI adoption accelerates, your workforce is prepared, not starting from zero.

Tell Better Stories: Highlighting the real employees who are investing in their development and growth through education programs isn't just good PR; it's a powerful internal engagement strategy. When employees see their peers advancing, it makes learning feel achievable and shows that growth is possible for everyone. These stories should be shared widely, with clear pathways to opportunities for advancement, wage increases, or new roles. Papa Johns has done this well through its Dough & Degrees program, turning learners into ambassadors and reinforcing the message that growth is possible at every level of the organization.

AI isn't going to replace your workforce, but it will reveal whether you've invested in them. It will expose the gaps between the companies that talk about transformation and the ones that actually prepare their people for it. The winners in this next era won't be the ones with the most sophisticated algorithms or the biggest tech budgets. They'll be the ones who saw AI not as a shortcut, but as a signal to double down on human potential.
Category:
E-Commerce
Nearly 30 years ago, when Google launched the search engine that started its long march to dominance, its founders started without much hardware. Known at first as Backrub and operated on the Stanford campus, the company's first experimental server packed 40 gigabytes of data and was housed in a case made of Duplo blocks, the oversize version of Lego. Later, thanks to donations from IBM and Intel, the founders upgraded to a small server rack.

In 2025, you can't even fit Google search in a single data center, something that's been true for a long time. Still, with a little clever resourcing and a lot of work, you can get pretty close to a modern Google-esque experience using a machine roughly the size of that original Google server. You can even house it in your laundry room.

That's where Ryan Pearce decided to put his new search engine, the robust Searcha Page, which has a privacy-focused variant called Seek Ninja. If you go to these web pages, you're hitting a server next to Pearce's washer and dryer. Not that you could tell from the search results. "Right now, in the laundry room, I have more storage than Google in 2000 had," Pearce says. "And that's just insane to think about."

Pearce's DIY search engine largely eschews the cloud. The top machine leverages old server parts as well as a makeshift vent to push away the heat those parts produce. The bottom computer provides a little extra support to the setup. [Photo: courtesy of Ryan Pearce]

Why the laundry room? Two reasons: heat and noise. Pearce's server was initially in his bedroom, but the machine ran so hot that it made sleeping uncomfortable. He has a separate bedroom from his wife because of sleep issues, but her prodding made him realize a relocation was necessary. So he moved it to the utility room, drilled a route for a network cable to get through, and now, between clothes cycles, it's where his search engines live.
"The heat hasn't been absolutely terrible, but if the door is closed for too long, it is a problem," he says.

Other than a little slowdown in the search results (which, to Pearce's credit, has improved dramatically over the past few weeks), you'd be hard-pressed to see where the gaps in his search engine lie. The results are often of higher quality than you might expect. That's because Searcha Page and Seek Ninja are built around a massive database that's 2 billion entries strong. "I'm expecting to probably be at 4 billion documents within a half year," he says. By comparison, the original Google, while still hosted at Stanford, had 24 million pages in its database in 1998, and 400 billion as of 2020, a fact revealed in 2023 during the United States v. Google LLC antitrust trial. By current Google standards, 2 billion pages are a drop in the bucket. But it's a pretty big bucket.

The not-so-secret ingredient: AI

The scale that Pearce is working at is wild, especially given that he's running it on what is essentially discarded server hardware. The secret to making it all happen? Large language models. "What I'm doing is actually very traditional search," Pearce says. "It's what Google did probably 20 years ago, except the only tweak is that I do use AI to do keyword expansion and assist with the context understanding, which is the tough thing."

Pearce's search engines emphasize a minimalist look, and a desire for honest user feedback.

If you're trying to avoid AI in your search, you might think, "Hey, wait, is this actually what I want?" But it's worth keeping in mind that AI has often been a key part of our search DNA. Tools such as reverse image search, for example, couldn't work without it. Long before we learned about glue on pizza, Google had been working to implement AI-driven context in more subtle ways, adding RankBrain to the mix about a decade ago.
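Pearce's code isn't public, so as a rough illustration of the "traditional search plus LLM keyword expansion" approach he describes, here is a hypothetical sketch: a classic inverted index, with a hardcoded synonym table standing in for the model call that would propose related terms. The corpus, doc IDs, and synonym map are all made up for the example.

```python
from collections import defaultdict

# Toy corpus standing in for crawled pages: doc id -> text.
DOCS = {
    1: "cheap laptop deals",
    2: "budget notebook computers",
    3: "gourmet pizza recipes",
}

def build_index(docs):
    """Classic inverted index: term -> set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)
    return index

# Stand-in for the LLM step. In a real pipeline, a model call would
# propose related terms for each query term; this table is hypothetical.
SYNONYMS = {
    "cheap": ["budget"],
    "laptop": ["notebook"],
}

def expand(terms):
    """Keyword expansion: append related terms to the raw query terms."""
    expanded = list(terms)
    for t in terms:
        expanded.extend(SYNONYMS.get(t, []))
    return expanded

def search(query, index):
    """Score docs by how many expanded query terms they contain."""
    scores = defaultdict(int)
    for t in expand(query.lower().split()):
        for doc_id in index.get(t, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

index = build_index(DOCS)
# Without expansion, only doc 1 matches; with it, doc 2 surfaces too.
print(search("cheap laptop", index))
```

The point of the sketch is the division of labor: ranking stays cheap and deterministic, and the LLM is only consulted to broaden the query, which is why the approach scales on modest hardware.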
And in 2019, Microsoft executives told a search marketing conference that 90% of Bing's search results came from machine learning, years before the search engine gained a chat window. In many ways, the frustration many users have with LLMs may oversimplify the truth about AI's role in search. It was already deeply embedded in modern search engines well before Google and Microsoft began to put it in the foreground. And what we're now learning is that AI is a great way to build and scale a search engine, even if you're an army of one.

Scaling on the cheap

In many ways, Pearce is leaning into an idea that has picked up popular relevance in recent years: self-hosting. Many self-hosters might use a mini PC or a Raspberry Pi. But when you're trying to build your own Google, you're going to need a little more power than can fit in a tiny box. Always curious about what it would be like to build a search engine himself, Pearce recently decided to actually do it, buying up a bunch of old server gear powerful enough to manage hundreds of concurrent sessions. It's more powerful than some of Google's early server setups. "Miniaturization has just made it so achievable," he says.

Enabling this is a concept I like to call "upgrade arbitrage," where extremely powerful old machines (particularly those targeting the workstation or server market) fall in price so significantly that the gear becomes attractive to bargain hunters. Many IT departments work around traditional upgrade cycles, usually around three years, meaning there's a lot of old gear on the market. If buyers are willing to accept the added energy costs that come with the older gear, savvy gadget shoppers can get a lot of power for not a lot of up-front money. The beefy CPU running this setup, a 32-core AMD EPYC 7532, underlines just how fast technology moves. At the time of its release in 2020, the processor alone would have cost more than $3,000.
It can now be had on eBay for less than $200, and Pearce bought a quality-control test version of the chip to further save money. "I could have gotten another chip for the same price, which would have had twice as many threads, but it would have produced too much heat," he says.

What he built isn't cheap (the system, all in, cost $5,000, with about $3,000 of that going toward storage), but it's orders of magnitude less expensive than the hardware would have cost new. (Half a terabyte of RAM isn't cheap, after all.) While there are certain off-site things that Pearce needs to lean on, the actual search engine itself is served from this box. It's bigger than a bread box, but a lot smaller than the cloud.

This is not how many developers approach complex software projects like this nowadays. Fellow ambitious hobbyist Wilson Lin, who on his personal blog recently described his efforts to create a search engine of his own, took the opposite approach from Pearce. He developed his own data-parsing technologies to shrink the cost of running a search engine to pennies on the dollar compared to competing engines, leaning on at least nine separate cloud technologies. "It's a lot cheaper than [Amazon Web Services], a significant amount," Lin says. "And it gives me enough capacity to get somewhere with this project on a reasonable budget."

Wilson Lin's cloud-based search engine, which uses a vector database, includes short summaries of every post produced by LLMs, which vary in length.

Why are these developers able to get so close to what Google is building on relatively tight budgets and minimal hardware builds? Ironically, you can credit the technology many users blame for Google's declining search quality: LLMs.

Catching up via LLMs

One of the biggest points of controversy around search engines is the overemphasis on artificial intelligence. Usually the result shows up in a front-facing way, by trying to explain your searches to you. Some people like the time savings. Some don't.
(Given that I built a popular hack for working around Google's AI summaries, it might not surprise you to learn that I lean toward the latter category.) But when you're attempting to build a dataset without a ton of outside resources, LLMs have proven an essential tool for reaching scale from a development and contextualization standpoint.

Pearce, who has a background in both enterprise software and game development, has not shied away from the programming opportunity that LLMs offer. What's interesting about his model is that he's essentially building the many parts that make up a traditional search engine, piecemeal. He estimates his codebase has around 150,000 lines of code at this juncture. "And a lot of that is going back and reiterating," he says. "If you really consider it, it's probably like I've iterated over like 500,000 lines of code." Much of his iteration comes in the form of taking features initially managed by LLMs and rewriting them to work more traditionally. That's created a design approach that allows him to build complex systems relatively quickly, and then iterate on what's working.

"I think it's definitely lowered the barrier," Lin says of the LLMs' role in enabling DIY search engines. "To me, it seems like the only barrier to actually competing with Google, creating an alternate search engine, is not so much the technology, it's mostly the market forces."

Seek Ninja, the more private of Pearce's two search engines, does not save your profile or use your location, making it a great incognito-mode option.

The complexity of LLMs is such that they are one of the few things Pearce can't implement on-site in his laundry-room setup. Searcha Page and Seek Ninja instead use a service called SambaNova, which provides speedy access to the Llama 3 model at a low cost.
Annie Shea Weckesser, SambaNova's CMO, notes that access to low-cost models is increasingly becoming essential for solo developers like Pearce, adding that the company is "giving developers the tools to run powerful AI models quickly and affordably, whether they're working from a home setup or running in production."

Pearce has other advantages that Sergey Brin and Larry Page didn't have three decades ago when they founded Google, including access to the Common Crawl repository. That open repository of web data, an important (if controversial) enabler of generative AI, has made it easier for him to build his own crawler. Pearce says he was actually blocked from Common Crawl at one point as he built his moonshot. "I really appreciate them. I wish I could give them back something, but maybe when I'm bigger," he says. "It's a really cool organization, and I want to be less dependent on them."

Small scale, big ambitions

There are places where Pearce has had to scale back his ambitions somewhat. For example, he initially thought he'd build his search engine using a vector database, which relies on algorithms to connect closely related items. "But that completely bombed," he says. "It was probably a lack of skill on my part. It did search, but . . . the results were very artistic, let's say," hinting at the fuzziness and hallucination that LLMs are known for.

Vector search, while complex, is certainly possible; that's what Lin's search engine uses, in the form of a self-created tool called CoreNN. Lin's engine presents results differently from Pearce's, which works more like Google: rather than using the meta descriptions most web pages have, it uses an LLM to briefly summarize the page itself and how it relates to the user's search term.

"Once I actually started, I realized this is really deep," Lin says of his project. "It's not a single system, or you're just focused on like a single part of programming."
"It's like a lot of different areas, from machine learning and natural language processing, to how do you build an app that is smooth and low latency?"

Pearce's Searcha Page is surprisingly adept at local searches, and can help find nearby food options quickly, based on your location.

And then there's the concept of doing a small-site search, along the lines of the noncommercial search engine Marginalia, which favors small sites over Big Tech. That was actually Pearce's original idea, one that he hopes to get back to once he nails down the slightly broader approach he's taken. But there are already ideas emerging that weren't even on Pearce's radar. "Someone from China actually reached out to me because . . . I think he wanted an uncensored search engine that he wanted to feed through his LLM, like his agent's search," he says. It's not realistic at this time for Pearce to expand beyond English; besides the additional costs, it would essentially require him to build brand-new datasets. But such interest hints at the sheer power of his idea, which, based on its location, he can literally hear.

He does see a point where he moves the search engine outside his home. He's a cloud skeptic, so it would likely be to a colocation facility or a similar type of data center. (Helping to pay for that future, he has started to dabble in some modest affiliate-style advertising, which tends to be less invasive than traditional banner ads.) "My plan is if I get past a certain traffic amount, I am going to get hosted," Pearce says. "It's not going to be in that laundry room forever."
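The vector-database idea the article describes (the approach Pearce abandoned and Lin's CoreNN pursues) can be illustrated with a minimal brute-force sketch. This is not CoreNN's code: the three-dimensional "embeddings" below are made up, where a real system would use model-generated vectors with hundreds of dimensions and an index structure to avoid scanning every document.

```python
import math

# Made-up toy "embeddings": doc title -> vector. In a real engine,
# an embedding model produces these from the page text.
DOCS = {
    "home espresso guide": [0.9, 0.1, 0.0],
    "pour-over coffee basics": [0.8, 0.2, 0.1],
    "bicycle chain repair": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 for similar vectors, near 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest(query_vec, docs, k=2):
    """Brute-force nearest-neighbor scan over all docs. Purpose-built
    vector databases index the vectors so they can skip most of this work."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# Hypothetical embedding for a query like "making coffee at home".
query = [0.85, 0.15, 0.05]
print(nearest(query, DOCS))  # the two coffee pages outrank the bike page
```

The appeal is that matching happens by meaning rather than exact keywords; the trade-off, as Pearce found, is that results can get "artistic" when the embeddings or the summarization model drift from what the user intended.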
In 1999, a new kettle changed everything. It was blue, bold, and sold at Target: not in a design boutique, not in a museum shop, and not in a luxury department store. This wasn't just a new product launch, though. It was the beginning of a seismic shift in how design, retail, and brand partnerships operate. This was the first time a high-end design firm joined hands with a mass-market retailer. Suddenly, Design for All wasn't just our tagline; it was a new strategic ethos.

Today, nearly every major retailer has experimented with design collaborations. But despite the proliferation of partnerships, only a select few have truly moved the needle. Why? Because great collaborations don't start with a product. They start with a shared philosophy and deep strategy.

The blueprint: What we learned at Target

Michael Graves Design's groundbreaking partnership with Target wasn't successful just because the products were beautiful and affordable. It worked because both organizations came to the table fully committed, with complementary strengths: Target had retail scale and marketing mastery; MGD brought world-class design and user empathy. Together, we created something neither could have done alone. This wasn't about "us and them"; it was about we and together. We were both going to Design for All.

MGD wasn't a vendor. Our designers became extensions of Target, embedded within their merchandising, sourcing, product development, and marketing workflows. That cross-functional integration was radical at the time. Today, it's more widely understood as essential.

Collaboration is a structure, not a slogan

At Michael Graves Design, we've taken what we learned and turned it into a method that we use with all partners. Our Direct-to-Retail Partnership Guide outlines a rigorous, multi-phase approach, from early benchmarking and ethnographic research to final prototype approvals and packaging integration.
Often, MGD's role is to demonstrate to siloed organizations the importance of cross-functional collaboration throughout the entire product development process. This approach leads to enhanced efficiency, broad cultural buy-in, and authentic innovation. Key elements of our process include:

Shared discovery: We start by collaborating with our partners' merchant teams to identify product categories and individual items ripe for innovation, using methods like in-store interviews and ethnographic research to recognize product opportunity gaps. We involve marketing teams in the consumer research to ensure that the consumer's voice makes its way into marketing messaging.

Integrated ideation: Retail merchant teams weigh in on the hero products and feature sets for each category, which become the primary focus for our ideation and presentation. To make selections among alternative design directions, we use mood boards, sketching, 3D modeling, 3D printing, and renderings.

Collaborative vendor engagement: All product designs change during the design-for-manufacture (DFM) phase, driven by manufacturing optimization methods. Design deliverables include detailed 2D and 3D documentation, specification packets, and brand books to eliminate ambiguity; but once the factories get involved, designs always evolve. As the design partner, we ensure design intent and innovation are maintained. Ideally, we loop in vendors and factories early, to partner in feasibility and optimization, and to account for factory capabilities, tariff efficiency, the vendor matrix, and other factors. With early engagement, factories become true partners, open to experimentation and spurring innovation.

End-to-end packaging and messaging: From dielines to social media standards, we ensure the design voice carries through every marketing touchpoint, the true genius of so many national retailers.
This process works because it's designed around learning, trust, cocreation, and an established division of responsibility, emphasizing each party's truest strengths.

Retail today: The stakes are higher than ever

Retail today is hypercompetitive. Consumers want products featuring original design and functional enhancements that reflect their own values. This is a high standard, which means standing out is harder and more important than ever. Legacy retailers are not just competing with one another; they're competing with direct-to-consumer brands, Amazon, and global marketplaces that redefine convenience and choice.

This is why design collaborations remain essential for retailers. When done right, exclusive design collections carry meaning beyond price. They create items that shoppers can't find anywhere else, build brand differentiation, drive repeat foot traffic, and foster deep emotional connections with consumers. That's where a direct-to-retail design collaboration becomes a powerful strategic asset. It delivers the prestige of a national brand with the economic structure of a private label. Design brands bring national-brand cachet, elevated aesthetics, and the cultural relevance of good design, while enabling factory-direct sourcing that supports retailer margin goals. It allows retailers to tell the marketing story with gusto. This hybrid model provides retailers with tools to build customer experiences that are both inspiring and financially sound.

What most collaborations get wrong

Too many collaborations fail because they're either surface-level PR plays or hierarchical vendor relationships dressed up as partnerships. A true design partnership means sharing a vision and committing to it, listening to each other's expertise, and building something together across all departments for both partners. It also means that a design partner is not a vendor in the traditional sense. They are an extension of the merchant and product development team.
It is a modern model where external creativity and strategic insight enhance efficiency and relevance. That requires a new mindset, especially for legacy retailers accustomed to more transactional models.

Design for All, still

As consumer expectations evolve and the retail landscape transforms again, the question isn't whether to collaborate. It's how. And the answer, we believe, still lies in the art of the direct-to-retail design partnership, because when design and retail truly partner, something remarkable happens.

Ben Wintner is CEO of Michael Graves Design.