2025-12-17 10:00:00| Fast Company

Cloudflare has often been described as some version of "the most important internet company you've never heard of." But for the better part of 2025, cofounder and CEO Matthew Prince has been trying to change that. The company's core business is to improve the performance and enhance the security of websites and online applications, protecting against malicious actors and routing web traffic through its data centers to optimize performance. "Six billion people pass through our network every single month," Prince says. If Cloudflare is doing its job well, no one notices.

But in July, Prince declared "Content Independence Day," a broadside against the AI companies that, in his view, were unfairly scraping content to the detriment of the media industry. Cloudflare enabled clients that signed up for its pay-per-crawl service to block AI crawlers from accessing their content unless the companies (Anthropic, Google, Meta, OpenAI, etc.) paid for the privilege. This was catnip to the media, Fast Company included, which immediately started paying a lot more attention to Cloudflare. "I think this is the most interesting question over the next five years," Prince says. "What is the future business model of the internet going to be?"

Prince has a personal interest in this question. He was the editor of his school newspaper at Trinity College (the Connecticut one, not Dublin) and, in 2023, he and his wife purchased the Park Record, his hometown newspaper in Park City, Utah. "I appreciate the hard work of our journalistic team, who's showing up at city council meetings, covering local politics. There has to be a business model to support that work," he says. "That work is critically important if we're going to have a functioning society."

This interview has been edited and condensed.

Before Cloudflare, you cofounded Unspam Technologies, an email spam-checker service, and the open-source Project Honey Pot, which tracks and identifies spammers and malicious bots.
There's a common thread to your companies. They're all about preventing something bad from happening, from spam to cyberattacks to unauthorized data scraping. What would a psychiatrist say about this?

I guess I have a superhero fetish or something.

You're a protector.

A protector, yeah. I went to law school, and so a lot of the ideas start with: Where is there a failure in society? And if we solve that problem in some way, we'll be able to turn that into a business. And that's worked, really. It didn't work as well with the first spam company [Unspam], but at Cloudflare, it's really driven everything that we've done.

[Photo: Amber Hakim]

What was your original mission for Cloudflare and how has it changed?

Cloudflare started about 15 years ago, when [cofounder and COO] Michelle [Zatlyn] and I were business students. When people would ask us what our mission was, we'd say, "Our mission is to take advantage of this interesting market opportunity, make some money, and impress our parents." Which is, I think, if anyone's being honest, kind of why almost everything starts.

We knew that in order to build out the network to service large customers, we needed data and we needed ways to build the models to figure out who the good guys were, who the bad guys were, and [how] to stop them. We had the bright idea that we would offer a free version of the service. We thought startups and individual developers would be the ones who would sign up. That's not what happened at first. What happened was that every civil society organization, every nonprofit, every humanitarian organization signed up because they had small budgets but big security problems. So one day we realized that everyone who was doing some sort of good around the internet was relying on us. I remember going to lunch with a bunch of our engineers, and one of them said, "This is the first job where I feel like I'm actually helping build a better internet." That resonated, and that phrase kept coming up.
Finally someone said, "That's Cloudflare's mission: to help build a better internet." And that's what stuck.

Cloudflare experienced a significant outage in mid-November after a routine infrastructure update. You corrected that problem within a few hours, but how do you mitigate these risks moving forward? Does the rise of AI affect the risk of these kinds of incidents?

Any outage is unacceptable given Cloudflare's role in supporting a large portion of the internet, and we take full responsibility. We're implementing additional safeguards to help prevent similar incidents in the future. Past outages have always led us to build new, more resilient systems. We'll also remain transparent, as we've always been in these situations; we published a postmortem within about 12 hours to share what happened and what we're learning. As the internet evolves, including the rise of AI, we continually assess new risks to ensure our systems remain resilient. Outages and bugs can happen (that's the nature of software), but our customers' trust is our top priority.

Over the years, you've come under pressure to deny service to sites that are associated with hate speech and harassment, raising questions about Cloudflare's role in content moderation. As you look ahead to the midterms and the 250th anniversary of America next year and then the national election in 2028, what concerns you most when it comes to misinformation and disinformation in the AI age?

I think it's funny that I'm sort of known as the content moderation guy. We're 15 years old, and we've had basically three incidents [the neo-Nazi website the Daily Stormer and extremist forums 8chan and Kiwi Farms]. Essentially, 6 billion people pass through our network every single month. That's the entire online population. That's the scale that we have, and we have a responsibility to those people. So the question is, When you have that responsibility, what do you do? People have written about this for a long time.
I actually went and dusted off a bunch of my philosophy books from college. Aristotle writes a lot about how governments build trust. We're not a government, but we operate at a scale that would be inconceivable to Aristotle, and at some level have the same challenges around that. Fundamentally, Aristotle argued that there are three things you need for trust: transparency, consistency, and accountability. Transparency: You need to know what the rules are. Consistency: The same rules should be applied the same way all the time. And then accountability: The people who apply the rules should be responsible to the rules themselves.

In answer to your question, there've been a couple of big AI companies that have invited me to be on their boards. I've always said no, but I engage with them; 80% of the big AI companies are Cloudflare customers, so we have a relationship with them. I think they're doing the right thing, and they're going a million miles an hour. And, I mean, it's so exciting. But we have to stop and think about: How do you build trust? I think I'm the largest nonacademic buyer of Aristotle's Politics on Amazon. I've sent signed copies to every AI executive I've met, saying, "I know you don't have a lot of time, but take the time to read this."

Let's talk about how AI is eroding the traditional information ecosystem and what Cloudflare is trying to do about it.

Twenty-seven years ago, a fateful thing happened: Google launched and did two things. One, it built a better search engine. Even more importantly, it built the first business model and monetization model for the internet. It helps generate traffic, and then it provides you the tools to make that traffic profitable. That has funded the growth of the vast majority of the internet. We've gone through some platform shifts along the way. We went to social, but social was still driven by traffic. What's going on right now (that I think people don't completely understand) is we're going through another platform shift.
It's a bigger platform shift than we've ever seen before, which is that the way you're going to consume information is through AI. With a search engine, you did a search, it returned 10 blue links, and then the search wasn't over. Google was a treasure map, which generated traffic to Fast Company or whoever; behind that treasure map, you could monetize it. But we know that's not the end state because sci-fi tells us it's not, and sci-fi often predicts the future pretty well. If you think about any movie that has a helpful robot in it, if you say, "I would like a recipe for chocolate chip cookies," the robot doesn't come back and say, "Here are 10 links, go follow 'em and maybe you'll find a nice recipe." It says, "Here's the recipe." And that's exactly what ChatGPT, Anthropic, and increasingly Google with AI Overviews are doing. And make no mistake: For 95% of users, 95% of the time, that's a better user interface. That user interface is going to win and is going to be the new platform by which we consume information.

Which is quite a problem for any entity (not just the media) that wants to be found on the internet.

Right. Instead of going and generating traffic, following a treasure map, and getting to Fast Company, now you're reading a derivative that's been summarized and maybe combined with other sources, taking the Fast Company information and putting it in this new ChatGPT interface. And that's a problem because the entire internet has been built on traffic, and that traffic is going away. So no matter what, as the interface of the internet changes, the business model of the internet is going to change.

You have a solution for this: the pay-per-crawl model. This business proposition theoretically enables those content providers to continue to provide that content, and be compensated for it, in a way that won't compromise this new and (I agree) better user experience. How would this work?

I'm optimistic because both sides need each other.
There are really three things you need to be an AI company, two of which are very expensive and one of which has largely been free. The two things that are expensive are going to get cheaper and cheaper, and the thing that has been free is going to be what differentiates AI companies, which they're going to be willing to pay more for.

So, what are the three things?

The first is chips, GPUs, but it's silicon, right? There's never been a time in history where a silicon shortage doesn't turn into a silicon glut. There's a bunch of sand in the world. GPUs will increasingly become commodities, the same way that CPUs and all other silicon have. The second is talent. Five years ago, if you were getting a PhD in AI, you were kind of a laughingstock. It was thought of as this dead industry that was hot in the '70s and '80s, and then it became the place where the sort of weird computer science professor went and promised that tomorrow AI was coming. Well, it turns out they were right. They just had the time frame wrong. But now it's gone from this backwater to every university spinning up a department. I don't think there will be a glut of AI researchers, but I think the days of billion-dollar salaries at Meta won't last forever, because the education markets are efficient.

The last bit is content. In almost all these cases, unique content ends up being the thing that differentiates media over time. YouTube, for example, started out as a technology play. It could deliver streaming video cheaper, faster than everybody else, and that's why it won. As the rest of the industry caught up with the technology, YouTube had to differentiate. First it was discoverability with search; now it's with unique content that you can only get on YouTube. I think the AI companies are going to be very, very similar, which means they're going to need that information that only you [media companies] have.
So the key (if you're a media company today) is to stop the free buffet: Only you have the review of the hot restaurant in Tuscaloosa, which is unique content that's going to be incredibly precious and incredibly essential. So step one is to say: We're not going to give every AI company our content for free. We're going to say, "You're blocked." That's what we at Cloudflare have been helping with. And then how the market develops after that, we have some ideas, but I'm not quite sure. What I'm confident in (and what the data so far bears out) is that the more unique, the more quirky, the more local your content is, the more valuable it is to AI companies, and the more likely it is that there's going to be a healthy and sustainable marketplace that exists for you to be able to sell that content. I think that this can be pie-expanding and that we might be on the doorstep of a golden age of media.

I love the optimism, and I want to believe it, for obvious reasons. To put a fine point on the mechanics of it, the publisher signs up; Cloudflare blocks the AI crawlers from accessing their content; the publisher sets the price for the AI company to access that content and get paid; and you guys get a cut. That's pretty much how that works?

We have a bunch of different theories of how this could work [over time]. It could be micropayments. That's what you've described, where the publisher sets a price, and then whenever an agent or a crawler or scraper (those are all synonyms) tries to access that content, they pay a fraction of a penny or a few pennies. It could be something that's closer to a Spotify model, where maybe all the AI companies contribute to a pool and that pool gets aggregated and then [distributed]. In Spotify's case, it's based on how many minutes get listened to. Exactly what the business model looks like, it's going to take some time to mature.
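The micropayment variant described above can be reduced to a simple decision rule: block unpaid crawlers, charge paying ones per request, and split the payment between the publisher and the gateway. The sketch below illustrates that logic only; the domain, bot name, and 10% fee are hypothetical, not Cloudflare's actual pricing or API.

```python
# Toy sketch of the pay-per-crawl idea: a publisher sets a per-request
# price, unpaid crawlers are blocked, and the gateway keeps a small cut.
# All names and numbers here are illustrative assumptions.

PRICES = {"example-news.com": 0.002}  # dollars per crawl, set by the publisher
GATEWAY_FEE = 0.10                    # hypothetical cut kept by the gateway

def handle_crawl(domain, crawler, paying_crawlers):
    """Return (decision, publisher_share, gateway_share) for one request."""
    if domain not in PRICES:
        return ("allow", 0.0, 0.0)    # publisher hasn't opted in to blocking
    if crawler not in paying_crawlers:
        return ("block", 0.0, 0.0)    # a real gateway might answer HTTP 402
    price = PRICES[domain]
    fee = price * GATEWAY_FEE
    return ("charge", price - fee, fee)

decision, to_publisher, to_gateway = handle_crawl(
    "example-news.com", "examplebot", {"examplebot"})
assert decision == "charge"
assert abs(to_publisher - 0.0018) < 1e-9
assert abs(to_gateway - 0.0002) < 1e-9
```

The Spotify-style alternative mentioned in the interview would replace the per-request charge with periodic distribution of a pooled fund, but the block-or-charge gate at the edge stays the same.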
If you think about music, we ended up with Spotify, but in order to get to Spotify, we started with Napster, which was sort of anything goes, and then Steve Jobs steps onstage and launches iTunes, 99 cents a song, which was revolutionary at the time, but that wasn't the business model that eventually won. The business model that eventually won was something closer to all-you-can-eat for $10 a month. My hunch is that we're not going to get the business model right the first time around, and it may not be Cloudflare that figures it out. There are lots of people who are thinking about this problem. But no matter what, we have to start with scarcity. We've got to close the spigot.

And again, this isn't just about media. The same challenges are coming for e-commerce companies, travel companies, anyone who sells anything online. I've been struck by how many of the people who are calling us are saying, "Hey, this is a real problem for us too." Big financial institutions where they're like, "No, no, no, the AI companies are disintermediating us as well, and they're creating a problem where our research teams aren't getting compensated as much." I mean, what's the future for a Booking.com in an AI-powered world? What's the future for anyone who in the past aggregated a bunch of supply together? What is a brand? What is it worth if it's just agents that are interacting and you don't have humans that are there? What I think people don't fully appreciate is that this is a more radical transformation than it was to go to mobile. Fundamentally, we're going to have to reinvent how we interact, and that's going to impact everyone.

Let's close by going beyond the information ecosystem. Something I struggle with is how seriously to take the existential threat of AI, not to revenue models, but to humanity itself.
Very smart people argue very different ends of the spectrum, from the terrifying vision of Nate Soares and Eliezer Yudkowsky, whose book on the dangers of a superhuman AI is called If Anyone Builds It, Everyone Dies, to the much more sanguine outlook of folks like Yann LeCun, Meta's chief AI scientist. Where do you fall on this spectrum?

I'm on the more optimistic side. More on Yann's side. But I will say that I feel like this is a distraction from the real problems [we're facing now]. Is there going to be a Terminator moment? We've got a lot of stuff to figure out before that. Sure, we can have cocktail party conversations about whether this is going to end the world or lead to kind of a utopia. But don't let that conversation distract from the more important, more immediate conversation, which is who's going to pay journalists going forward?

[Laughs] I agree: Nothing could be more important than that.


Category: E-Commerce

 


2025-12-17 09:00:00| Fast Company

It's rare that your esoteric, impossible-to-pronounce, decade-long research project becomes a technology so crucial to national security that the President of the United States calls it out from the White House. But that's exactly what happened to Dr. Eric Wengrowski, the CEO of Steg AI. Wengrowski spent nearly a decade of his life advancing steganography, a deeply technical method for tracking images as they travel through the machinery of the modern internet, as the focus of his PhD at Rutgers University. After earning his degree, Wengrowski and a team of cofounders rolled his tech into a small startup. For several years, the company grew, but mostly toiled away in relative obscurity. Then, AI image generators exploded into the public's consciousness. And for Wengrowski and Steg's team, everything blew up.

Durable Marks

I met Wengrowski during the pandemic, when we both volunteered to help a media industry trade group rapidly pivot its yearly in-person conference to a Zoom format. For years, I only knew Wengrowski as a cheerful, highly intelligent floating head in my video chat window. I even interviewed him for my YouTube channel from the COVID-safe confines of our respective home offices. When I finally met him in person in San Francisco in 2023, I discovered that he's actually a towering 6 feet 3 inches tall. It was one of those iconic pandemic professional meet-cute moments people joke about, where you find that someone you've virtually known for years looks totally different in person. What wasn't different about Wengrowski in real life was his intense interest and passion for his chosen field.

Steganography (pronounced STEG-an-ography, like the "Steg" in stegosaurus) is a technique for embedding an invisible code into the pixels of an image. Basically, a complex algorithm subtly changes selected pixels in a way that's invisible to human perception. Images look no different after being marked with a steganographic watermark than they did before.
Yet when special software looks at the marked image, the unique code embedded in its pixels comes through clearly to the software's computerized eyes. The presence of that code lets companies like Steg track a marked image back to its source with extremely high accuracy. Crucially, because the code is embedded directly into the image's pixels, it's also nearly impossible to remove.

Bad actors can easily crop out a visible watermark from an image's pixels, or use a tool like Photoshop to scrub data from the image's IPTC or EXIF metadata fields. In contrast, because steganographic watermarks live directly in the visual part of the image itself, they travel with the image no matter where it goes. And they survive the most common image-related funny business that nefarious people might try to use to remove them. Steganographic watermarks can survive things that amateur image thieves might try, like aggressive cropping, or even the common practice of taking a screenshot of an image in order to stealthily steal it.

But Steg's tech goes even further, Wengrowski told me in an interview. If, for example, you load an image watermarked by the company's tech on your computer screen, take out your phone, and photograph the physical screen, the company's watermarks will survive in the new image on your phone. Your nefarious copy will remain traceable to the original with Steg's tech.

AI Explodes Everything

When Wengrowski originally developed Steg's technology, he knew it was cool. And he had a hunch that it was useful for something. But exactly what that something might involve wasn't originally clear. In the early days, Steg slowly grew by helping companies with legal compliance and image protection. Steg would embed its watermarks in copyrighted images, for example, and then trace where those images ended up. If someone stole and used a copyrighted image without permission, Steg's embedded watermarks could be used to prove the theft and could help lead to a legal settlement.
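The general idea of hiding a code in pixels can be illustrated with the textbook technique of least-significant-bit (LSB) embedding. To be clear, this is not Steg AI's algorithm: Steg's production watermarks use far more robust, perceptually tuned methods that survive screenshots and re-photography, which plain LSB does not. The sketch below only shows why a marked image looks unchanged to the eye while carrying a readable code.

```python
# Textbook least-significant-bit (LSB) steganography: hide a bit string
# in the lowest bit of each pixel. Each pixel value changes by at most 1
# out of 255, which is imperceptible to humans but trivially readable
# by software. A conceptual sketch only, not Steg AI's actual method.

def embed(pixels, message_bits):
    """Hide message_bits in the LSBs of a flat list of 0-255 pixel values."""
    if len(message_bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract(pixels, n_bits):
    """Read n_bits back out of the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

img = [200, 17, 54, 254, 99, 3, 128, 77]   # toy 8-pixel grayscale "image"
code = [1, 0, 1, 1]
marked = embed(img, code)
assert extract(marked, 4) == code                            # code survives
assert all(abs(a - b) <= 1 for a, b in zip(img, marked))     # near-invisible
```

Note that this simple scheme is exactly what robust watermarking improves on: LSBs are destroyed by JPEG compression or screenshotting, whereas the article describes marks that survive even photographing a screen.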
The company also worked to safeguard things like pre-release images of a new product. If a company sent top-secret images of a new phone (marked with Steg's tech) to a supplier, for example, and those photos suddenly ended up as a leak in TechCrunch, the company could trace the embedded watermark and know who to blame. That was enough for Steg to grow slowly and steadily improve its tech.

Then, in 2022, everything changed. All at once, OpenAI's DALL-E image generation models took off (remember the avocado chairs?), Midjourney rolled out its then world-beating image generation tech, and Google leaned into image generation within its Bard and later Gemini AI models. Almost overnight, the world was awash in AI images. And very quickly, they became so realistic that everyday people had trouble knowing what was real and what was AI generated.

This presented a huge problem for AI companies. They wanted to release their tech far and wide. But they fretted about the potential societal (and legal) consequences if their images were used for deepfakes to deceive people, or even to sway elections. And more broadly, anyone with an interest in the veracity of images suddenly had a huge problem knowing what was real and what was AI-generated. Everything from news reporting to war crimes tribunals relies on imagery as evidence. What happens when that imagery can be quickly and cheaply spun up by an AI algorithm?

Yes, AI companies can visibly watermark their images (such as by adding a little Gemini star in the lower right), or embed "Generated by AI" markers in their images' metadata. But again, removing those markers is child's play for even the least sophisticated scammers. With AI image generators storming the world, the origins and veracity of every image online were suddenly called into question.

Thank You, Mr. President

That led to a bizarre situation for any deeply technical person pursuing their random, highly specific passion in relative obscurity.
On October 30, 2023, Wengrowski woke to find that then-President Joe Biden had issued an executive order specifically calling out AI watermarking tech, highlighting it as a crucial factor in national security, and ordering all federal agencies to use it. Specifically, Biden's order mandated embedding information "that is difficult to remove" into outputs created by AI, "including into outputs such as photos, videos, audio clips, or text, for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance." The order also specifically called for the rapid development of science-backed standards and techniques for "labeling synthetic content, such as using watermarking." Biden framed this as mission critical: the term "national security" appears 36 times in his executive order. Basically, Biden was mandating the use of tech like steganography, and specifically calling it out from the White House.

When that happened, Wengrowski told me, everything went crazy. Since the order (and the corresponding growth of AI imagery more broadly) Steg's revenue has increased 500%. Moreover, protecting the integrity of images appears bipartisan; Wengrowski told me that AI watermarking has been embraced by both the Biden and Trump administrations. In an extremely tight AI job market where top researchers can command eight-figure salaries, Steg now employs five machine learning PhDs devoted to improving its technologies. Although Wengrowski couldn't share his customer list on the record, I can vouch for the fact that it's wildly impressive.

While keeping its legal compliance and image tracing side alive, Steg has expanded aggressively into the world of cybersecurity and AI image watermarking. For AI companies that want to ply their trade without ruining humanity's trust in visual media, Steg's tech is a lifeline. Companies can embed a steganographic watermark directly into AI images the moment they're generated.
For the life of an image, the code travels with it, even if it's reposted, edited, or altered. If that image is used as a deepfake or used to manipulate or harass people, the company that created it can quickly read the embedded steganographic watermark in its pixels, definitively label it as a fake, and quickly dispel any damage the image might cause. If you've created an AI image in the last year, you've almost certainly used steganography without even knowing it. Most major AI image generation companies now use the tech. Many use Steg's. And in a world where AI images are so good that they easily fool most detectors (and even trained forensic image analysts), many companies see steganography as the only bulwark against AI's total destruction of any truth still left in the visual world.

A Wild Ride

For Steg, and for Wengrowski personally, it's been a wild ride. Right as Biden issued his order, Wengrowski became a father, and now juggles the everyday struggles and joys of a young parent with the rigors of such things as constant travel and testifying in state legislatures. The rise of AI imagery has also revealed some counterintuitive challenges. When Steg first launched, Wengrowski told me, he expected that people would yearn for a technology that could prove whether an image was real or fake. In reality, he was surprised by how little people care. Many people are fine with seeing AI-generated content, as long as it's funny, informative, or otherwise engaging. Whether or not it's properly labeled as AI matters very little to them. More pointedly, it matters very little to the social media platforms that disseminate the content, too. Again, though, for the companies who create that content (and who face legal and reputational risk if their tech runs awry) it matters an awful lot.

Wengrowski tells me that Steg is continuing to improve its tech, making its watermarks even harder to beat. The company is also entering the emerging field of "poisoning."
New software that Wengrowski showed me invisibly alters images in ways that trip up common deepfake algorithms. If someone tries to turn the poisoned image into a deepfake, it comes out garbled and illegible. The tech works both when images are used for training deepfake models and when a bad actor tries to create a deepfake of a specific person. The idea is that an influencer, for example, could upload poisoned images of themselves to their social media. The images would look normal to human users. But if someone tried to deepfake the influencer, the poisoned images would thwart them. Wengrowski told me he's especially excited to use the tech to help protect young influencers, and teens in general, who are often targeted in abhorrent cyberbullying attacks involving explicit deepfakes.

More broadly, though, Wengrowski's story is an inspiring one for anyone grinding away on an as-yet unproven technology, convinced of its value but unsure whether the world will ever see their work. Reflecting on Steg's success, Wengrowski acknowledged that "It's probably best to start a business with a clear plan and an understanding of product/market fit." But in his words, "There's also something to be said for knowing a technology is cool, continually improving it even if you have no idea where that will lead, and just trusting that eventually it will have some value for the world." In Steg's case, that's indeed been a winning formula.



2025-12-17 09:00:00| Fast Company

Many Americans are likely to see massive changes to their taxes in 2026, especially seniors. That's largely due to President Donald Trump's so-called "big, beautiful bill," a massive 940-page bill signed into law over the summer that includes an array of new tax write-offs but also fails to renew some previous deductions from the Biden administration. One change is a $6,000 deduction for seniors. Here's what to know.

Who qualifies for the new senior tax deduction?

Trump's tax and spending law introduced a $6,000 deduction for qualifying seniors ages 65 and older, on top of the current additional standard deduction for seniors under existing law. Taxpayers must attain age 65 on or before the last day of the taxable year to be eligible. The $6,000 senior deduction (or $12,000 for a married couple where both spouses qualify) applies to an eligible individual earning up to $75,000 in modified adjusted gross income, or up to $150,000 for joint filers. It is available for both itemizing and non-itemizing taxpayers. Taxpayers must include the Social Security number of the qualifying individual(s) on the return, and file jointly if married, to claim the deduction.

How does the deduction impact Social Security?

The deduction is meant to offset federal taxes on Social Security payments. Up to 85% of an older taxpayer's Social Security benefits can be taxable, depending on their combined income, which is calculated as the taxpayer's adjusted gross income plus half of their Social Security benefits, according to CNBC.

Anything else to know?

According to the IRS, the deduction expires at the end of 2028, right before Trump leaves office, making this a temporary deduction effective for tax years 2025 through 2028.
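The eligibility rules above can be summarized as a small calculation. The sketch below implements only the rules as the article states them ($6,000 per qualifying spouse age 65+, subject to the $75,000/$150,000 income limits); the actual statute contains further details, such as phase-outs, that the article does not cover, so treat this as an illustration, not tax guidance.

```python
# Simplified illustration of the senior deduction as described in the
# article: $6,000 per qualifying person age 65+, available up to the
# stated MAGI limits. The real statute has additional rules (e.g.
# phase-outs) not modeled here.

def senior_deduction(magi, ages, joint=False):
    """Extra deduction for tax years 2025-2028 under the article's rules."""
    limit = 150_000 if joint else 75_000
    if magi > limit:
        return 0
    qualifying = sum(1 for age in ages if age >= 65)  # 65 by year-end
    return 6_000 * qualifying

assert senior_deduction(70_000, [66]) == 6_000                    # eligible single filer
assert senior_deduction(140_000, [67, 65], joint=True) == 12_000  # both spouses qualify
assert senior_deduction(80_000, [70]) == 0                        # over the income limit
```

Note that this sits on top of, not in place of, the existing additional standard deduction for seniors the article mentions.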


