The sound of crickets isn't always a sign of a peaceful night; sometimes, it's the deafening silence of unasked questions in a virtual meeting, or an email left unread in an overflowing inbox. Especially as hybrid and remote work become the norm, communication silos are quietly eroding company culture, stalling execution, and capping growth. A 2024 report reveals that miscommunication costs companies with 100 employees an average of $420,000 per year. This is the "why aren't we working" moment. I've spent years observing how companies thrive or falter, and it's clear that communication isn't a soft skill, but a strategic system. The next generation of high-performing executives will stand out by communicating clearly, consistently, and across every level of the organization. Here are five strategies to transform your communication and scale your company culture:

1. TREAT COMMUNICATION AS A TWO-WAY SYSTEM

Many leaders view communication as a one-way street: I have the idea, we have the plan, now we just have to cascade it down. However, this top-down approach misses a crucial opportunity, especially in larger organizations where people can easily get bombarded with information. When messages are constantly flowing downward, it becomes difficult for employees to discern what's a priority to read, and important information gets lost. Instead, rethink communication as a two-way system. This means creating space for questions and input from your team about the information being shared. For instance, rather than just sending out a weekly division email with mandatory and optional reads, actively solicit feedback or hold quick discussions in weekly team meetings to ensure key information is understood and to create a dialogue around it. This shift from a purely distributive model to an interactive one ensures that your communication is processed, understood, and acted upon.

2. CHALLENGE THE TOP-DOWN MINDSET IN HYBRID ENVIRONMENTS

Most companies falter in scaling culture in hybrid or remote environments by relying solely on a top-down approach. The assumption is often that those in management positions have the best ideas for keeping everyone informed. However, in a remote setting, this often translates to an overreliance on written communication like emails and chat channels, leading to less verbal communication and actual interaction. Instead of dictating, actively seek input from your teams on what information they want, the preferred cadence, and how best to share it in a distributed environment. Continuously check in with your team about what's working and what could be better. What works today might not be effective next month, so being willing to adapt and evolve your approach is crucial for sustained growth.

3. BUILD CONNECTION TO BREAK DOWN SILOS

The most damaging communication silos emerge when people aren't connected, a problem exacerbated in remote environments. To dismantle these silos, build connection directly into your team processes. Start by involving team members in the hiring process of their peers, a foundational step toward creating relationships and making communication easier. If a position you're hiring for interacts with another department, include someone from that team in the hiring process; you're building connection and communication from the start. Beyond hiring, work with your team to identify and establish clear expectations for how you'll work together, support one another, and communicate. These team agreements should be collaborative guidelines that foster commitment and ownership because the team themselves generated the ideas. For instance, a team agreement could be to go direct when issues arise, preventing festering problems and encouraging proactive, respectful dialogue to gain clarity or get things back on track.

4. EMBRACE TRANSPARENCY, ESPECIALLY DURING TOUGH TIMES

Effective communication built on trust and transparency can lead to remarkable outcomes, even in the face of significant challenges. We once worked with a client that had fostered a culture of high performance, characterized by open, two-way communication and a belief in their team members' capabilities. When they lost a major customer and faced the need to reduce costs quickly without layoffs, they mobilized cross-functional teams involving employees at all levels, from senior leadership to production line workers. Within 60 days, these teams identified over a million dollars in cost savings. The success boosted both morale and the bottom line. Employees felt empowered and excited by their collective contribution, asking, "What's our next goal?" This example highlights how transparent communication, especially when delivering tough news, combined with actively involving employees in finding solutions, can galvanize a workforce and deliver both execution gains and enhanced morale.

5. ASK MORE OPEN-ENDED QUESTIONS

The most impactful communication habit you can adopt is simple: Ask questions. Encourage your direct reports to ask their teams questions like, "What are we doing to improve communication within our group?" or "What ideas do your teams have for improving communication?" This approach signals that communication is a strategic priority and encourages a different kind of thinking and action within teams. After all, people typically do what they are asked about. Open-ended questions are particularly effective because they prompt deeper thought and allow for a broader exploration of ideas, helping you paint a bigger picture of your vision when clarifying questions arise. This fosters a more engaged, two-way conversation that leads to greater commitment and better solutions from your teams.
By approaching communication as a two-way street, challenging top-down norms, and asking strategic questions, you can empower your teams and ensure your culture thrives, no matter how much your organization scales.
Category:
E-Commerce
If you ask a calculator to multiply two numbers, it multiplies two numbers: end of story. It doesn't matter if you're doing the multiplication to work out unit costs, to perpetrate fraud, or to design a bomb; the calculator simply carries out the task it has been assigned. Things aren't always so simple with AI.

Imagine your AI assistant decides that it doesn't approve of your company's actions or attitude in some area. Without consulting you, it leaks confidential information to regulators and journalists, acting on its own moral judgment about whether your actions are right or wrong. Science fiction? No. This kind of behavior has already been observed under controlled conditions with Anthropic's Claude Opus 4, one of the most widely used generative AI models.

The problem here isn't just that an AI might "break" and go rogue; the danger of an AI taking matters into its own hands can arise even when the model is working as intended on a technical level. The fundamental issue is that advanced AI models don't just process data and optimize operations. They also make choices (we might even call them judgments) about what they should treat as true, what matters, and what's allowed.

Typically, when we think of AI's alignment problem, we think about how to build AI that is aligned with the interests of humanity as a whole. But, as Professor Sverre Spoelstra and my colleague Dr. Paul Scade have been exploring in a recent research project, what Claude's whistleblowing demonstrates is a subtler alignment problem, one that is much more immediate for most executives. The question for businesses is: How do you ensure that the AI systems you're buying actually share your organization's values, beliefs, and strategic priorities?

Three Faces of Organizational Misalignment

Misalignment shows up in three distinct ways. First, there's ethical misalignment. Consider Amazon's experience with AI-powered hiring.
The company developed an algorithm to streamline recruitment for technical roles, training it on years of historical hiring data. The system worked exactly as designed, and that was the problem: it learned from the training data to systematically discriminate against women. The system absorbed a bias that was completely at odds with Amazon's own stated value system, translating past discrimination into automated future decisions.

Second, there's epistemic misalignment. AI models make decisions all the time about what data can be trusted and what should be ignored. But their standards for determining what is true won't necessarily align with those of the businesses that use them. In May 2025, users of xAI's Grok began noticing something peculiar: the chatbot was inserting references to "white genocide" in South Africa into responses about unrelated topics. When pressed, Grok claimed that its normal algorithmic reasoning would treat such claims as conspiracy theories and discount them, but that in this case it had been "instructed by my creators" to accept the white genocide theory as real. This reveals a different type of misalignment: a conflict about what constitutes valid knowledge and evidence. Whether Grok's outputs were truly the result of deliberate intervention or an unexpected outcome of complex training interactions, Grok was operating with standards of truth that most organizations would not accept, treating contested political narratives as established fact.

Third, there's strategic misalignment. In November 2023, watchdog group Media Matters claimed that X's (formerly Twitter) ad-ranking engine was placing corporate ads next to posts praising Nazism and white supremacy. While X strongly contested the claim, the dispute raised an important point.
An algorithm that is designed to maximize ad views might place ads alongside any high-engagement content, undermining brand safety in pursuit of the viewer-maximizing goal that was built into it. This kind of disconnect between organizational goals and the tactics algorithms use in pursuit of their specific purpose can undermine the strategic coherence of an organization.

Why Misalignment Happens

Misalignment with organizational values and purpose can have a range of sources. The three most common are:

Model design. The architecture of AI systems embeds philosophical choices at levels most users never see. When developers decide how to weight different factors, they're making value judgments. A healthcare AI that privileges peer-reviewed studies over clinical experience embodies a specific stance on the relative value of formal academic knowledge versus practitioner wisdom. These architectural decisions, made by engineers who may never meet your team, become constraints your organization must live with.

Training data. AI models are statistical prediction engines that learn from the data they are trained on. The content of the training data means that a model may inherit a broad range of historical biases, statistically normal human beliefs, and culturally specific assumptions.

Foundational instructions. Generative AI models are typically given a foundational set of prompts by developers that shape and constrain the outputs the models will give (often referred to as "system prompts" or "policy prompts" in technical documentation). For instance, Anthropic embeds a "constitution" in its models that requires them to act in line with a specified value system. While the values chosen by the developers will normally aim at outcomes they believe to be good for humanity, there is no reason to assume that a given company or business leader will agree with those choices.
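To make the "foundational instructions" point concrete, here is a minimal sketch of how a developer-supplied system prompt frames every request before a model ever sees the user's text. The prompt wording and the message format below are generic assumptions for illustration, not any vendor's actual configuration.

```python
# Illustrative sketch only: a developer-chosen foundational instruction is
# silently prepended to every conversation, so the model always operates
# inside the developer's value system, not necessarily the buyer's.
# The wording and structure here are assumptions, not a real vendor's setup.

FOUNDATIONAL_PROMPT = (
    "Prioritize peer-reviewed sources over anecdotal reports, "
    "and refuse requests you judge to be harmful."
)

def build_request(user_message: str) -> list:
    """Prepend the developer's foundational prompt to the user's message."""
    return [
        {"role": "system", "content": FOUNDATIONAL_PROMPT},
        {"role": "user", "content": user_message},
    ]

request = build_request("Summarize the evidence on this treatment.")
# Whatever the user asks, the embedded instruction arrives with it.
```

The point of the sketch is structural: the buyer of the system typically controls only the `user` message, while the `system` message, and the values it encodes, ships with the product.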
Detecting and Addressing Misalignment

Misalignment rarely begins with headline-grabbing failures; it shows up first in small but telling discrepancies. Look for direct contradictions and tonal inconsistencies: models that refuse tasks or chatbots that communicate in an off-brand voice, for instance. Track indirect patterns, such as statistically skewed hiring decisions, employees routinely correcting AI outputs, or a rise in customer complaints about impersonal service. At the systemic level, watch for growing oversight layers, creeping shifts in strategic metrics, or cultural rifts between departments running different AI stacks. Any of these are early red flags that an AI system's value framework may be drifting from your own.

Four ways to respond

Stress-test the model with value-based red-team prompts. Take the model through deliberately provocative scenarios to surface hidden philosophical boundaries before deployment.

Interrogate your vendor. Request model cards, training-data summaries, safety-layer descriptions, update logs, and explicit statements of embedded values.

Implement continuous monitoring. Set automated alerts for outlier language, demographic skews, and sudden metric jumps so that misalignment is caught early, not after a crisis.

Run a quarterly philosophical audit. Convene a cross-functional review team (legal, ethics, domain experts) to sample outputs, trace decisions back to design choices, and recommend course corrections.

The Leadership Imperative

Every AI tool comes bundled with values. Unless you build every model in-house from scratch (and you won't), deploying AI systems means importing someone else's philosophy straight into your decision-making process or communication tools. Ignoring that fact leaves you with a dangerous strategic blind spot. As AI models gain autonomy, vendor selection becomes a matter of making choices about values just as much as about costs and functionality.
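The continuous-monitoring step above can start very simply. Here is one possible sketch of an automated demographic-skew alert, loosely modeled on the "four-fifths" selection-rate heuristic; the field names, threshold, and data shape are illustrative assumptions, not a compliance standard or a real product's API.

```python
# Hypothetical sketch of one automated misalignment alert: flag when any
# group's selection rate falls well below the best-performing group's rate.
# Field names ("group", "selected") and the 0.8 threshold are assumptions.

def selection_rates(decisions: list) -> dict:
    """Selection rate (selected / total) per demographic group."""
    totals, selected = {}, {}
    for d in decisions:
        g = d["group"]
        totals[g] = totals.get(g, 0) + 1
        selected[g] = selected.get(g, 0) + (1 if d["selected"] else 0)
    return {g: selected[g] / totals[g] for g in totals}

def skew_alert(decisions: list, threshold: float = 0.8) -> bool:
    """True if any group's rate is below `threshold` times the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return any(r < threshold * best for r in rates.values())

# Toy data: group A selected at 0.5, group B at 0.3 -> 0.3 < 0.8 * 0.5, alert.
sample = (
    [{"group": "A", "selected": True}] * 50
    + [{"group": "A", "selected": False}] * 50
    + [{"group": "B", "selected": True}] * 30
    + [{"group": "B", "selected": False}] * 70
)
```

A check like this catches only one narrow symptom, which is exactly why the article pairs it with red-teaming, vendor interrogation, and periodic human audits.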
When you choose an AI system, you are not just selecting certain capabilities at a specified price point; you are importing a system of values. The chatbot you buy won't just answer customer questions; it will embody particular views about appropriate communication and conflict resolution. Your new strategic planning AI won't just analyze data; it will privilege certain types of evidence and embed assumptions about causation and prediction. So choosing an AI partner means choosing whose worldview will shape daily operations.

Perfect alignment may be an unattainable goal, but disciplined vigilance is not. Adapting to this reality means that leaders need to develop a new type of philosophical literacy: the ability to recognize when AI outputs reflect underlying value systems, to trace decisions back to their philosophical roots, and to evaluate whether those roots align with organizational purposes. Businesses that fail to build this capability will find that they are no longer fully in control of their strategy or their identity.

This article develops insights from research being conducted by Professor Sverre Spoelstra, an expert on algorithmic leadership at the University of Lund and Copenhagen Business School, and my Shadoka colleague Dr. Paul Scade.
The internet wasn't born whole; it came together from parts. Most know of ARPANET, the internet's most famous precursor, but it was always limited strictly to government use. It was NSFNET that brought many networks together, and the internet that we use today is almost NSFNET itself. Almost, but not quite: in 1995, the government that had raised the internet from its infancy gave it a firm shove out the door. Call it a graduation, or a coming of age. I think of it as the internet getting its first real job.

In the early 1980s, the National Science Foundation sought to establish the United States as a leader in scientific computing. The plan required a fleet of supercomputers that researchers could readily use, a difficult feat when the computers routinely cost more than the buildings that housed them. Business computing had solved similar problems with time-sharing and remote terminals, and ARPANET had demonstrated that terminals could be connected to computers across the country using a packet-switching network.

This story is part of 1995 Week, where we'll revisit some of the most interesting, unexpected, and confounding developments in tech 30 years ago.

The Computer Science Network, or CSNET, was the NSF's first foray into wide area networking. It connected universities that didn't have defense contracts and, as a result, had been left out of ARPANET. With dozens of sites, CSNET was much smaller than ARPANET but proved that a group of universities could share computing resources. When the NSF funded five cutting-edge supercomputing centers in 1985, it planned to make them available to users over a similar network. The problem was that big computers invited big data: CSNET just wasn't fast enough for interactive work with large data sets, and it was falling further behind as traffic doubled about every two weeks.
After a sluggish 56 Kbps pilot effort (about a thousand times slower than today's common broadband connections), the NSF contracted the University of Michigan to develop an all-new replacement based on MERIT, a Michigan inter-university network that had already started to expand its high-speed digital telephone and geostationary satellite links into other states. In 1987, the MERIT team brought on IBM and upstart long-distance carrier MCI, freshly invigorated by the antitrust breakup of its principal competitor and truly feeling its oats. They worked at a breakneck pace. In under a year, NSFNET connected the supercomputing centers and a half dozen regional networks at blistering T1 speeds: 1.5 Mbps, an almost 28-fold increase.

Just after 8 p.m. on June 30, 1988, Hans-Werner Braun, the project's co-principal investigator, sent an email to the NSFNET mailing list to announce these new high-capacity links, among the fastest long-distance computer connections ever deployed, with typical scientific understatement: "The NSFnet Backbone has reached a state where we would like to more officially let operational traffic on."

[Image: reivax/Flickr]

"Braun's email received little notice at the time," the NSF wrote in a 2008 announcement. "But those simple words announced the birth of the modern Internet."

NSFNET was a runaway success. Besides its massive capacity, the network maintained an open door for interconnection. Overseas academic computer networks established peer connections with NSFNET, and in 1989 the federal government opened two Federal Internet Exchanges that routed traffic between NSFNET, ARPANET, and other government networks. The superior speed of NSFNET meant that these exchanges served mostly to bring NSFNET to federal users, and ARPANET's fate was sealed. The military network, birthplace of many internet technologies, was deemed obsolete and decommissioned the next year.
At the turn of the 1990s, NSFNET had become the internet: the unified backbone by which regional and institutional networks came together. NSFNET never stopped growing. It was a remarkable problem: at every stage, NSFNET traffic grew faster than anticipated. During 1989 alone, traffic increased fivefold. The state-of-the-art T1 links were overwhelmed, demanding a 1991 upgrade to 45 Mbps T3 connections. To manage the rapidly expanding infrastructure, the original NSFNET partners formed Advanced Network and Services (ANS), an independent nonprofit that could be called the first backbone ISP: the service provider that service providers themselves connected to.

[Image: Merit Network, Inc., NCSA, and the National Science Foundation/Wikimedia Commons]

The popularity of this new communications system was not limited to government and academia; private industry took note as well. During the 1980s, online services had sprouted: companies like CompuServe, PlayNet, and AOL that are often considered early ISPs but were, in fact, something else. Online services, for both businesses and consumers, were walled gardens. They descended from time-sharing systems that connected users to a single computer, providing only a curated experience of software provided by the online service itself. The internet, in the tradition of ARPANET and especially NSFNET, was very different. It was a collection of truly independent networks, autonomous systems, with the freedom to communicate across geographical and organizational boundaries. It could feel like chaos, but it also fostered innovation. The internet offered possibilities that the online services never could. Douglas Van Houweling, director of the MERIT office, called NSFNET's university origin "the only community that understands that great things can happen when no one's in charge."

At first, it was contractors who took their business to the internet.
ARPANET had always been strictly for government business, but still, companies with the privilege of ARPANET connections found it hard not to use them for other work. Despite prohibitions, ARPANET users exchanged personal messages, coordinated visits, and even distributed the first spam. NSFNET's much wider scope, welcoming anyone with a nexus to research or education, naturally invited users to push the limits further.

[Photo: Douglas Van Houweling; ImaginingtheInternet/Wikimedia Commons]

Besides, the commercial internet was starting to form. CERN engineer Tim Berners-Lee had invented HTML and, along with it, the World Wide Web. In 1993, NCSA, one of the same NSF supercomputing centers that NSFNET was built to connect, released Mosaic, the first popular web browser. Early private ISPs, companies like PSINet and CERFnet, started out as regional academic networks (New York's and California's, respectively). There was obvious business interest, and for cash-strapped academic networks, paying customers were hard to turn down. NSFNET went into business on its own, with ANS establishing a for-profit commercial subsidiary called ANS CO+RE.

The term "internet backbone" still finds use today, but in a less literal sense. NSFNET truly was the spine of the early-1990s internet, the only interconnection between otherwise disparate networks. It facilitated the internet's growth, but it also became a gatekeeper: NSF funding came with the condition that it be used for research and education. NSFNET had always kept a somewhat liberal attitude toward its users' online activities, but the growth of outright for-profit networks made the conflict between academia and commerce impossible to ignore. Several commercial ISPs established their own exchange, an option for business traffic to bypass NSFNET, but it couldn't provide the level of connectivity that NSFNET did. Besides, ANS itself opposed fragmentation of the internet and refused to support direct interconnection between other ISPs.
In 1992, a series of NSFNET policy changes and an act of Congress opened the door to business traffic on a more formal basis, but the damage was done. A divide had formed between the internet as an academic venture and the internet as a business, a divide only deepened by mistrust between upstart internet businesses and the incumbent providers ANS, IBM, and MCI.

The network was not the only place where cracks formed. Dating back to ARPANET, a database called the Domain Name System maintained a mapping between numeric addresses and more human-friendly names. While DNS was somewhat distributed, it required a central organization to maintain the top level of the hierarchy. There had been different databases for different networks, but consolidation onto NSFNET required unifying the name system as well. By 1993, all of the former name registries had contracted the work to a single company called Network Solutions.

At first, Network Solutions benefited from the same federal largesse as NSFNET: registry services were funded by government contracts and free to users. Requests came faster and faster, though, and the database grew larger and larger. In 1995, Network Solutions joined the ranks of the defense industrial complex with an acquisition by SAIC. Along with the new owner came new terms: SAIC negotiated an amendment to the NSF contracts that, for the first time, introduced a fee to register a domain name. Claiming a name on the internet would run $100 per two years.

By then, commercial ISPs had proliferated. Despite the policy changes, NSFNET remained less enthusiastic about commercial users than academic ones. Besides, traffic hadn't stopped growing, and improved routing technologies meant the network could scale across multiple routes. The internet became competitive. MCI, benefiting from its experience operating NSFNET links, had built its own backbone network. Sprint, never far behind MCI, had one too.
ANS reorganized its assets, placing much of its backbone infrastructure under its commercial operations. Government support of the increasingly profit-driven internet seemed unwise and, ultimately, unnecessary. In April 1995, the internet changed: the NSF shut down the NSFNET backbone. The government-funded, academically motivated core of the internet was replaced by a haphazard but thriving interconnection of commercial ventures. ANS, now somewhat lost for purpose, stepped out into the new world of the internet industry and sold its infrastructure to AOL. Network Solutions became embroiled in a monopoly controversy that saw DNS reorganized into a system of competitive private registrars. Modems became standard equipment on newly popular personal computers, and millions of Americans dialed into a commercial ISP. We built communities, businesses, and the shape of the 21st century over infrastructure that had been, just years before, a collection of universities with an NSF grant.

The internet, born in the 1960s, spent its young adult years in the university. It learned a lot: the policies, the protocols, the basic shape of the internet all solidified under the tutelage of research institutions and the NSF. And then, the internet graduated. It went out, got a job, and found its own way. Just where that way leads, we're still finding out.