Jonathan Albarran

Writing on technology, decisions, and the systems that shape them

Tag: AI

  • Why SEO Just Became More Important Than Ever

    AI was supposed to kill SEO. Instead, it made search optimization the most critical business function of 2025.

    For the past two years, the marketing world has been bracing for SEO’s extinction. ChatGPT would replace Google. AI chatbots would make search engines obsolete. Organic traffic would vanish as users asked questions directly to language models instead of clicking through search results.

    That’s not what happened.

    Instead, something unexpected emerged: SEO has become more valuable, not less. The companies seeing this shift early are adjusting their content strategies accordingly. The ones ignoring it are watching their digital presence slowly evaporate from both traditional search and AI-powered discovery systems.

    The reason comes down to economics and physics. AI models can’t magic information out of thin air. They need sources. And obtaining those sources just became an order of magnitude more expensive and technically complex.

    The billion-dollar retraining problem

    Training a frontier AI model has become obscenely expensive. Google reportedly spent $192 million training Gemini 1.0 Ultra. OpenAI’s GPT-4 cost an estimated $79 million. Industry analysts expect the largest models to exceed a billion dollars in training costs by 2027.

    Those aren’t one-time expenses. Models need updating. New information emerges daily. Without fresh data, AI systems become outdated reference libraries spouting information from their last training cutoff.

    But retraining isn’t like updating software. A single retraining run can cost millions of dollars, consume weeks of compute time, and emit hundreds of tons of CO2. For context, the cost of training frontier models has grown by a factor of 2.4 per year since 2016.

    No company can afford to retrain massive models every time new information appears. OpenAI famously chose not to fix a known mistake in GPT-3 because retraining would have been too expensive. Google’s DeepMind avoided certain architectural experiments for its StarCraft AI because the training costs were prohibitive.

    So what do AI companies do instead? They scrape the web. Constantly.

    Google just declared war on AI scrapers

    In September 2025, Google quietly removed a feature that had existed for years: the ability to view 100 search results on a single page. The change seemed minor. It wasn’t.

    The removal targeted a specific URL parameter that SEO tools, researchers, and AI companies had used to efficiently scrape large batches of search results. Instead of making one request for 100 results, scrapers now need to make ten separate requests.

    The cost just increased tenfold.
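
    To make the arithmetic concrete, here is a back-of-envelope Python sketch. The per-request cost and keyword volume are made-up placeholders; only the one-request-versus-ten-requests structure reflects the actual change.

    ```python
    # Back-of-envelope scraping economics. COST_PER_REQUEST and KEYWORDS_TRACKED
    # are hypothetical placeholders; the 100 -> 10 results-per-page shift is the
    # real change described above.
    RESULTS_NEEDED = 100            # ranking depth tracked per keyword
    COST_PER_REQUEST = 0.002        # assumed cost per request in USD (made up)
    KEYWORDS_TRACKED = 1_000_000    # assumed monthly tracking volume (made up)

    def monthly_cost(results_per_page: int) -> float:
        """Cost of collecting RESULTS_NEEDED results for every tracked keyword."""
        requests_per_keyword = -(-RESULTS_NEEDED // results_per_page)  # ceiling division
        return requests_per_keyword * KEYWORDS_TRACKED * COST_PER_REQUEST

    before = monthly_cost(results_per_page=100)  # one request per keyword
    after = monthly_cost(results_per_page=10)    # ten requests per keyword
    print(f"before: ${before:,.0f}/mo  after: ${after:,.0f}/mo  ({after / before:.0f}x)")
    ```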

    Google’s public statement was carefully neutral: “The use of this URL parameter is not something that we formally support.” But the timing tells a different story. AI platforms like ChatGPT, Perplexity, and others had been aggressively scraping Google’s results to train models and provide real-time answers.

    [Chart: after Google disabled the num=100 parameter in September 2025, search impression data dropped 80-90% for many sites as bot traffic vanished from analytics.]

    The change had immediate ripple effects. Rank-tracking tools broke. Search Console impression data plummeted as bot traffic disappeared from reporting. SEO researchers estimate the change effectively hides 80-90% of indexed pages from bulk data collection.

    More importantly, it signals that Google views AI scrapers as a competitive threat worth fighting. The move forces AI companies to work harder and pay more to access the same information.

    AI models still need the open web

    Here’s the paradox: AI was supposed to replace search engines, but AI models depend entirely on content that’s optimized for search engines to find.

    Language models don’t generate knowledge. They synthesize information from sources. When ChatGPT answers a question about recent events, it’s either searching the web in real-time or pulling from content it previously indexed. When Perplexity provides citations, those citations come from web pages that were discoverable, crawlable, and well-structured.

    AI-powered web scraping has become a massive industry. The global web scraping market is projected to surpass $1 billion by 2030, with AI integration driving much of that expansion. Modern AI scrapers use machine learning to adapt to website changes, bypass anti-scraping measures, and extract data from JavaScript-heavy sites.

    But they’re still fundamentally doing web scraping. They still need to find your content, access it, parse it, and understand it. The same factors that make content discoverable to Google make it discoverable to AI systems.

    What AI systems look for

    AI models and their scraping systems prefer certain content characteristics:

    Structured data. Clean HTML, semantic markup, proper heading hierarchies. Schema.org markup that explicitly defines what content represents. AI parsers work better when content follows predictable patterns (a minimal markup sketch appears after this list).

    Authoritative sources. Original research, expert analysis, proper citations. AI systems need to assess reliability. Content from established domains with strong backlink profiles and consistent publishing histories ranks higher in both traditional search and AI training pipelines.

    Fresh information. Models can’t rely solely on stale training data. Real-time scraping focuses on recently published or updated content. Sites that publish regularly and update existing content signal ongoing value.

    Accessible content. Paywalls, aggressive bot protection, and complex JavaScript can make content invisible to scrapers. Ironically, the same technical factors that hurt traditional SEO also limit AI discoverability.
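
    Here is the structured-data sketch promised above: a minimal Python example that emits Schema.org Article markup as JSON-LD. Article and the properties used are documented Schema.org terms; the headline, author, date, and URL values are placeholders.

    ```python
    import json

    def article_jsonld(headline: str, author: str, published: str, url: str) -> str:
        """Render Schema.org Article markup as a JSON-LD script tag.

        Article, headline, author, datePublished, and url are documented
        Schema.org terms; the values passed in below are placeholders.
        """
        data = {
            "@context": "https://schema.org",
            "@type": "Article",
            "headline": headline,
            "author": {"@type": "Person", "name": author},
            "datePublished": published,  # ISO 8601 dates keep parsers happy
            "url": url,
        }
        return ('<script type="application/ld+json">\n'
                + json.dumps(data, indent=2)
                + "\n</script>")

    print(article_jsonld(
        headline="Why SEO Just Became More Important Than Ever",
        author="Jonathan Albarran",
        published="2025-10-01",            # placeholder date
        url="https://example.com/seo-ai",  # placeholder URL
    ))
    ```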

    You’re now optimizing for multiple discovery channels

    The competitive landscape has shifted. Your content used to compete primarily in Google search results. Now it competes across multiple discovery channels simultaneously:

    Traditional search engines still drive 90%+ of web traffic for most businesses. Google processes over 8 billion searches daily. Bing, DuckDuckGo, and other engines collectively handle billions more. This hasn’t changed.

    AI-powered search is growing rapidly. Google’s Gemini AI chatbot received over 1 billion visits in September 2025, up 46% from the previous month. Perplexity, ChatGPT’s search feature, and other AI search tools are seeing similar growth.

    Direct AI citations represent a new traffic source. When AI systems cite sources in their responses, they’re creating new referral traffic. Some marketers report that citations in AI-generated answers now drive measurable traffic, particularly for technical, educational, and authoritative content.

    Training data pipelines determine long-term visibility. Content that makes it into model training datasets gains persistent visibility. Every time someone asks a related question, your expertise influences the response even without explicit citation.

    The businesses winning in this environment aren’t choosing between traditional SEO and AI optimization. They’re building content strategies that work across all discovery channels simultaneously.

    The new metrics that actually matter

    Traditional SEO metrics still apply, but they’re no longer sufficient. Forward-thinking marketing teams are tracking additional signals:

    AI Overview appearances. How often does your content appear in Google’s AI-generated summaries? These featured positions drive significant visibility even when users don’t click through.

    Citation frequency. Are AI systems citing your content when answering questions in your domain? Some teams use custom scripts to query ChatGPT, Perplexity, and other tools with relevant questions, then log which sources get cited (a minimal sketch of such a script appears after this list).

    Structured data coverage. What percentage of your content includes proper schema markup? AI parsers rely heavily on structured data to understand context and relationships.

    Content freshness signals. How frequently are you publishing and updating content? Recency matters more in an environment where AI systems need current information but can’t afford constant retraining.

    Source authority metrics. Traditional measures like domain authority, backlink quality, and expert authorship have taken on new importance. AI systems use these same signals to assess source reliability.
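
    Here is the kind of citation-tracking script referenced above. It assumes the official openai Python client and a model that includes source URLs in its text when asked; the model name, sample questions, and URL regex are illustrative rather than a tested methodology.

    ```python
    import re
    from collections import Counter

    from openai import OpenAI  # assumes the official `openai` package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTIONS = [  # replace with questions your customers actually ask
        "What are current best practices for technical SEO?",
        "How do AI crawlers discover and parse web content?",
    ]

    def cited_domains(answer: str) -> list[str]:
        """Pull bare domains out of any URLs that appear in the answer text."""
        return re.findall(r"https?://(?:www\.)?([\w.-]+)", answer)

    tally: Counter[str] = Counter()
    for question in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": f"{question} Cite source URLs."}],
        )
        tally.update(cited_domains(response.choices[0].message.content or ""))

    for domain, count in tally.most_common(10):
        print(f"{count:>3}  {domain}")
    ```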

    The visibility gap just got wider

    Google’s scraping restrictions have created an unexpected consequence: top-ranking content matters more than ever.

    When AI systems and SEO tools could easily access 100 search results at once, lower-ranked content still had visibility. Position 45 was discoverable. Position 78 showed up in comprehensive data pulls.

    Now that data collection requires ten times as many requests, systems focus on top results. The first page of search results gets scraped frequently. Page two occasionally. Pages three through ten rarely.

    The practical effect: content that doesn’t rank on page one has become functionally invisible not just to human users but to AI systems building knowledge bases.

    This creates a reinforcement loop. Top-ranking content gets indexed by AI systems. AI systems then cite and amplify that content. Citations and traffic improve search rankings. Better rankings lead to more AI citations.

    Meanwhile, lower-ranked content becomes increasingly marginalized in both traditional search and AI discovery channels.

    Quality finally became the differentiator

    For years, SEO had a reputation problem. Too many businesses treated it as a technical game of manipulating algorithms rather than a discipline of creating genuinely valuable content.

    AI has changed that calculation. Language models are remarkably good at assessing content quality, originality, and expertise. They can detect thin content, keyword stuffing, and manipulative link schemes. They prioritize sources that demonstrate real knowledge and authority.

    The businesses benefiting most from the AI-powered discovery landscape share common characteristics:

    They publish original research and unique insights rather than rehashing common knowledge. They employ genuine experts who contribute specialized knowledge. They invest in comprehensive, well-researched content that thoroughly addresses topics. They update existing content regularly to maintain accuracy and relevance. They structure information clearly with proper formatting, citations, and references.

    In other words, they do SEO the way it was always supposed to be done: by creating genuinely valuable content that serves user needs.

    The strategic imperative

    Understanding the economics changes the strategic calculation. AI companies will continue scraping the web because retraining remains prohibitively expensive. Search engines will continue serving results because that’s their business model. Content creators who understand this dynamic have an opportunity.

    The companies thriving in this environment treat SEO not as a marketing tactic but as foundational infrastructure for digital discoverability. Their content strategies explicitly account for both human readers and AI systems.

    They’re asking different questions: Does our content structure help AI parsers understand our expertise? Are we building the kind of authoritative presence that AI systems consider reliable? When AI tools answer questions in our domain, are we getting cited?

    These aren’t separate from traditional SEO. They’re extensions of the same principles: create valuable content, structure it clearly, build authority, make it discoverable.

    The difference is scale and consequence. Traditional SEO determined whether humans could find you. AI-era SEO determines whether both humans and AI systems can find you, understand you, cite you, and amplify you.

    What this means for businesses

    The practical implications vary by industry and business model, but several patterns are emerging across successful organizations:

    Content investment is increasing, not decreasing. Companies that cut content budgets expecting AI to fill the gap are finding the opposite. Quality content requires more investment in an AI-powered world, not less.

    Technical SEO fundamentals matter more. Clean code, fast loading times, mobile optimization, structured data implementation. These technical factors affect both traditional search visibility and AI scraping efficiency.

    Authority building has become critical. Backlinks, expert authorship, consistent publishing, industry recognition. AI systems use these same signals to assess source reliability.

    Content freshness drives ongoing value. Publishing new content and updating existing content signals ongoing relevance to both search engines and AI systems.

    Cross-channel optimization is necessary. Successful strategies work for traditional search, AI search tools, training data pipelines, and direct traffic simultaneously.

    The competitive advantage

    Companies with strong SEO foundations are discovering an unexpected advantage. The same content strategies that drove Google rankings now drive AI citations. The same technical infrastructure that helped search engines crawl sites helps AI scrapers access content. The same authoritative positioning that built search visibility builds AI credibility.

    Meanwhile, competitors who dismissed SEO as obsolete are finding themselves invisible in both traditional and AI-powered discovery.

    The gap will widen. AI systems amplify existing authority. Top-ranking content gets cited more, which improves rankings, which drives more citations. Lower-visibility content becomes increasingly marginalized.

    This creates a window of opportunity. Organizations that recognize the shift and invest now in comprehensive, authoritative, well-optimized content are building compounding advantages. They’re positioning themselves as the sources AI systems reference, the authorities human users trust, and the destinations both types of searchers ultimately reach.

    The bottom line

    SEO didn’t die when AI emerged. It evolved into something more fundamental: the infrastructure layer of digital discoverability in a world where both humans and machines search for information.

    The economics are clear. AI companies can’t afford constant retraining. They need to scrape the web for fresh information. That means content creators who understand how to be discoverable, authoritative, and useful maintain control over their digital destiny.

    The question isn’t whether to invest in SEO. It’s whether you’re investing enough, in the right ways, to remain visible as discovery channels multiply and competition intensifies.

    The companies getting this right aren’t treating SEO as a marketing channel. They’re treating it as core infrastructure for how their business gets found, understood, and trusted in an AI-powered world.

    That’s not a nice-to-have capability. That’s existential.

    By The Numbers

    • $192M: Estimated cost to train Google’s Gemini 1.0 Ultra
    • 2.4x: Annual growth rate of AI model training costs since 2016
    • $1B+: Expected cost of largest AI models by 2027
    • 10x: Cost increase for scraping Google after num=100 removal
    • 80-90%: Percentage of indexed pages effectively hidden from bulk scraping
    • 1.1B: Monthly visits to Google’s Gemini AI chatbot (September 2025)
    • 46%: Month-over-month growth in Gemini usage
  • What is AI? The technology reshaping human civilization

    Artificial intelligence has become the most consequential technology of the early 21st century, capable of writing code, diagnosing diseases, and generating photorealistic videos—yet its creators still cannot fully explain how it works. In 2024, AI researchers won the Nobel Prize in Chemistry for predicting protein structures, AI systems achieved silver-medal performance at the International Mathematical Olympiad, and companies poured over $100 billion into AI development. This technology, once confined to academic laboratories and science fiction, now touches billions of daily lives through search engines, virtual assistants, and an expanding array of applications that seemed impossible just five years ago.

    Understanding AI has become essential not merely for technologists but for anyone seeking to navigate the modern world. The decisions being made today—about how AI systems are built, regulated, and deployed—will shape economic opportunity, scientific discovery, and the balance of power for decades to come. What follows is a comprehensive guide to this transformative technology: what it is, how it works, what it can and cannot do, and where it might be taking us.

    From Turing’s dream to ChatGPT’s reality

    The quest to create thinking machines began long before silicon chips existed. In 1950, British mathematician Alan Turing posed a deceptively simple question in his landmark paper “Computing Machinery and Intelligence”: Can machines think? He proposed what became known as the Turing Test: a measure of machine intelligence based on whether a human judge, conversing with a machine through text, could reliably distinguish it from a person. This philosophical provocation launched a field.

    Six years later, at a summer workshop at Dartmouth College, a group of researchers including John McCarthy, Marvin Minsky, and Claude Shannon coined the term “artificial intelligence” and made an audacious prediction: that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This optimism proved premature. The history of AI is marked by cycles of enthusiasm and disappointment—periods researchers call “AI winters”—when funding dried up after promised breakthroughs failed to materialize.

    The first winter arrived in the 1970s when early neural networks, including Frank Rosenblatt’s “Perceptron,” hit fundamental limitations. A second came in the late 1980s when expert systems—programs encoding human knowledge as explicit rules—proved brittle and expensive to maintain. Throughout these winters, however, key foundations were being laid. Researchers developed the mathematical technique of backpropagation for training neural networks. Computing power continued its relentless exponential growth. And in 2009, Stanford researcher Fei-Fei Li completed ImageNet, a dataset of 14 million labeled images that would prove transformative.

    The modern AI revolution began in 2012 when a neural network called AlexNet, created by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, won the ImageNet competition by a stunning margin—reducing the error rate from 26% to just 15.3%. This was not a marginal improvement but a paradigm shift. Three elements had converged: massive datasets, GPU computing power, and refined algorithms. The age of deep learning had arrived.

    How neural networks learn to think

    At its core, artificial intelligence refers to computer systems designed to perform tasks that typically require human intelligence—recognizing images, understanding language, making decisions. But this broad definition encompasses radically different approaches, from explicitly programmed rules to systems that learn from experience.

    Modern AI is dominated by machine learning, in which algorithms improve through exposure to data rather than explicit programming. Within machine learning, the most powerful current approach is deep learning: the use of artificial neural networks with many layers of processing. These networks are loosely inspired by the brain’s architecture—collections of simple computational units (artificial neurons) connected in complex patterns—though the analogy is imprecise.

    An artificial neuron receives numerical inputs, multiplies each by a learned “weight” representing its importance, sums these products, adds a “bias” term, and passes the result through an activation function that introduces non-linearity. Simple operations, but stack millions of neurons in dozens of layers and something remarkable emerges: the ability to recognize faces, translate languages, or generate poetry. The magic lies not in any single neuron but in the learned weights connecting them—patterns extracted from vast quantities of training data through a process called backpropagation, which adjusts weights to minimize prediction errors.
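
    Stripped to code, a neuron really is that simple. Here is a minimal NumPy sketch of the description above, with made-up inputs and weights standing in for learned values:

    ```python
    import numpy as np

    def relu(x):
        """A common activation function: pass positives through, zero out negatives."""
        return np.maximum(0.0, x)

    def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
        """One artificial neuron: weighted sum, plus bias, through a non-linearity."""
        return float(relu(np.dot(inputs, weights) + bias))

    # A layer is just many neurons sharing the same inputs: one matrix multiply.
    def layer(inputs: np.ndarray, weight_matrix: np.ndarray, biases: np.ndarray) -> np.ndarray:
        return relu(inputs @ weight_matrix + biases)

    x = np.array([0.5, -1.2, 3.0])   # made-up input values
    w = np.array([0.1, 0.4, -0.2])   # made-up "learned" weights
    print(neuron(x, w, bias=0.05))   # a single activation

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))      # 3 inputs feeding 4 neurons
    print(layer(x, W, np.zeros(4)))  # four activations at once
    ```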

    The breakthrough that enabled current AI systems came in 2017 when Google researchers published “Attention Is All You Need,” introducing the transformer architecture. Previous approaches processed sequences (like sentences) one element at a time, making it difficult to capture relationships between distant words. Transformers use an “attention mechanism” that allows each element to directly consider every other element, computing relevance scores that determine how much weight to give different parts of the input. This parallelizable approach proved dramatically more efficient to train and better at capturing long-range dependencies.
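
    The mechanism is compact enough to show directly. Below is a minimal NumPy sketch of scaled dot-product attention, the formula at the heart of the paper, with random matrices standing in for the learned query, key, and value projections:

    ```python
    import numpy as np

    def softmax(x: np.ndarray) -> np.ndarray:
        """Row-wise softmax, shifted for numerical stability."""
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

        Every position's query is scored against every position's key, and the
        scores weight a mix of the values. This all-pairs comparison is what lets
        transformers capture relationships between distant words in parallel.
        """
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)  # (seq, seq) relevance matrix
        return softmax(scores) @ V

    rng = np.random.default_rng(0)
    seq_len, d_model = 5, 8  # a 5-token "sentence" of 8-dimensional embeddings
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

    out = attention(X @ Wq, X @ Wk, X @ Wv)
    print(out.shape)  # (5, 8): one contextualized vector per token
    ```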

    Large language models like GPT-4 and Claude are transformers trained on internet-scale text corpora—hundreds of billions to trillions of words—to predict the next word in a sequence. This simple objective, applied at sufficient scale, produces emergent capabilities that continue to surprise even their creators. The models learn grammar, facts, reasoning patterns, and even something that looks like common sense, all from the statistical regularities of human text.
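
    The objective itself fits in a few lines. A toy sketch: shift the token sequence by one position so every token is trained to predict its successor, and score predictions with cross-entropy. Frontier models do exactly this, just with learned networks and trillions of tokens:

    ```python
    import numpy as np

    # A toy vocabulary and one-sentence "corpus".
    vocab = ["<s>", "the", "cat", "sat", "on", "mat"]
    ids = {word: i for i, word in enumerate(vocab)}
    tokens = [ids[w] for w in ["<s>", "the", "cat", "sat", "on", "the", "mat"]]

    # Shift by one: each position's input is trained to predict the next token.
    inputs, targets = tokens[:-1], tokens[1:]
    print([(vocab[i], vocab[t]) for i, t in zip(inputs, targets)])
    # [('<s>', 'the'), ('the', 'cat'), ('cat', 'sat'), ...]

    def cross_entropy(probs: np.ndarray, target: int) -> float:
        """Penalty for assigning low probability to the true next token."""
        return float(-np.log(probs[target] + 1e-12))

    # A stand-in "model" that guesses uniformly. Training adjusts billions of
    # weights so predicted distributions concentrate on the observed next tokens.
    uniform = np.full(len(vocab), 1.0 / len(vocab))
    loss = np.mean([cross_entropy(uniform, t) for t in targets])
    print(f"average loss: {loss:.3f}  (uniform baseline is log {len(vocab)} = {np.log(len(vocab)):.3f})")
    ```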

    Training these models involves three stages. First, pretraining on massive unlabeled text teaches basic language understanding. Second, supervised fine-tuning on curated instruction-response pairs teaches the model to follow directions helpfully. Third, reinforcement learning from human feedback (RLHF) refines responses based on human preferences—annotators rank different outputs, a “reward model” learns to predict these preferences, and the language model is optimized to score highly. This process is expensive: training GPT-3 reportedly cost $4.6 million in compute alone, and current frontier models cost far more.
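
    The reward-model step has a standard formulation worth seeing. Below is a minimal sketch of the Bradley-Terry style preference loss commonly used to train reward models; the scalar rewards are made-up stand-ins for a real model’s scores:

    ```python
    import numpy as np

    def sigmoid(x: float) -> float:
        return 1.0 / (1.0 + np.exp(-x))

    def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
        """Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected).

        The loss is small when the reward model already scores the response
        that annotators preferred above the one they rejected, and large
        otherwise, pushing the model toward human preferences.
        """
        return float(-np.log(sigmoid(reward_chosen - reward_rejected) + 1e-12))

    # Made-up scores a reward model might assign to two candidate answers.
    print(preference_loss(reward_chosen=2.0, reward_rejected=-1.0))  # ~0.05 (agrees)
    print(preference_loss(reward_chosen=-1.0, reward_rejected=2.0))  # ~3.05 (disagrees)
    ```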

    What today’s AI can actually do

    The capabilities of AI systems have expanded with startling speed. OpenAI’s o3 model, released in early 2025, scored 87.5% on ARC-AGI, a benchmark specifically designed to test novel reasoning and long considered resistant to AI, edging past the 85% human baseline. On professional examinations, GPT-4 passes the bar exam, medical licensing exams, and advanced placement tests. Google’s Med-Gemini achieves 91% accuracy on medical licensing questions. AI systems have reached grandmaster level in chess, Go, and poker, and now compete at elite levels in competitive programming.

    In coding, the transformation has been dramatic. GitHub Copilot, Claude, and similar tools now generate, debug, and refactor code across entire projects. On SWE-bench Verified—a benchmark requiring AI to autonomously fix real software bugs—Claude achieved over 72% success, a capability unimaginable five years ago. Developers report that AI can handle routine programming tasks while they focus on architecture and design.

    Perhaps most visibly, AI now generates strikingly realistic images and videos. OpenAI’s Sora produces twenty-second videos at 1080p resolution from text descriptions, creating “complex scenes with multiple characters, specific types of motion, and accurate details.” Google’s Veo 2 generates videos “increasingly difficult to distinguish from professionally produced content.” Midjourney, DALL-E, and Stable Diffusion have transformed graphic design, advertising, and concept art—though they have also raised profound questions about artistic authenticity and copyright.

    Scientific applications may prove most transformative of all. AlphaFold, developed by Google DeepMind, predicted the three-dimensional structures of over 200 million proteins—a problem that had stymied biologists for decades. Its creators, Demis Hassabis and John Jumper, won the 2024 Nobel Prize in Chemistry. The tool has been used by over three million researchers across 190 countries, accelerating work on malaria vaccines, cancer treatments, and enzyme design.

    Yet AI systems remain deeply flawed. Hallucinations—confident assertions of false information—remain pervasive. According to one study, 89% of machine learning engineers report their models exhibit hallucinations. OpenAI’s o3 hallucinates on 33% of queries in certain benchmarks. “Despite our best efforts, they will always hallucinate. That will never go away,” admits Vectara CEO Amin Ahmad. Real consequences have followed: attorneys have been sanctioned for citing AI-generated legal precedents that do not exist, with fines reaching $31,000.

    AI systems also struggle with reasoning under adversity. Apple researchers found that adding “extraneous but logically inconsequential information” to math problems caused performance drops of up to 65%. Models may be “replicating reasoning steps from training data” rather than truly reasoning—a distinction with profound implications for reliability in high-stakes applications.

    The industry building tomorrow

    The AI industry is dominated by a handful of players in an intense competition for talent, compute, and market share. OpenAI, valued at $300 billion after raising $40 billion in early 2025, created ChatGPT and the GPT series of models. Its partnership with Microsoft—which has invested over $14 billion—gives it access to vast cloud infrastructure and distribution through products like Copilot. The company’s latest models include GPT-4o, which processes text, images, and audio seamlessly, and the o1/o3 reasoning models that “think” before responding.

    Anthropic, founded by former OpenAI researchers focused on AI safety, has raised $6.45 billion with major backing from Amazon. Its Claude models emphasize helpfulness, harmlessness, and honesty—“constitutional AI” trained to follow explicit principles. Claude 3.5 Sonnet became the first frontier model with “computer use” capability, able to control mouse and keyboard to interact with software.

    Google DeepMind, formed from the 2023 merger of Google Brain and the original DeepMind, leverages its parent company’s vast resources and data. Its Gemini models power Google’s products serving billions of users, while specialized systems like AlphaFold and AlphaGeometry push scientific boundaries. Gemini 2.5 Pro achieved the top position on major benchmarks, demonstrating Google’s continued competitiveness.

    Meta has pursued a distinctive open-source strategy, releasing its Llama models for anyone to download, modify, and deploy. Llama 3.1’s 405 billion parameter version became the first frontier-level open model, and the Llama family has been downloaded over 650 million times. CEO Mark Zuckerberg argues this approach prevents AI from being controlled by a few companies, though critics note Meta’s licenses contain significant restrictions.

    Elon Musk’s xAI, valued at $80 billion, built a 200,000-GPU data center in Memphis and launched Grok models integrated with the X platform. Mistral, a French startup valued at over $14 billion, has released competitive open-weight models while building enterprise products. The Chinese company DeepSeek demonstrated that capable models could be trained at lower costs, challenging assumptions about the resources required for frontier AI.

    All these companies depend on NVIDIA, whose GPUs are the essential substrate of AI development. The company sold 500,000 H100 chips in a single quarter of 2023, and its market capitalization has exceeded $2 trillion. Its latest Blackwell architecture delivers another leap in performance. Despite efforts by AMD, Intel, and custom chip programs from Google and Amazon, NVIDIA’s dominance remains formidable.

    AI transforms how we work and live

    Healthcare presents some of AI’s most promising applications. Over 80 AI radiology products received FDA clearance in 2023 alone. Britain’s NHS uses AI-powered lung screening that detected 76% of cancers at earlier stages than traditional methods. AI systems have reduced chest X-ray interpretation times from 11 days to under 3. In drug discovery, AI-enabled workflows have cut the time to identify drug candidates by up to 40%, with Insilico Medicine’s AI-designed compound advancing to Phase II clinical trials for pulmonary fibrosis.

    In the legal profession, AI adoption increased 315% from 2023 to 2024. Law firms deploy systems like Harvey AI for contract analysis, regulatory scanning, and multilingual drafting. JPMorgan Chase reports AI saves 360,000 hours of annual work by lawyers and loan officers. Yet the technology’s impact has fallen short of early predictions—only 9% of firms report shifting to alternative fee arrangements, despite widespread expectations of disruption.

    Financial services have embraced AI for fraud detection, with the U.S. Treasury reporting that AI helped prevent or recover over $4 billion in fraud in fiscal year 2024. Banks use machine learning for credit risk assessment, algorithmic trading, and customer service, though the technology has also raised concerns about bias in lending decisions.

    The creative industries face the most profound disruption. Music generation platforms like Suno, valued at $500 million, allow anyone to create professional-quality songs from text prompts. The first AI-assisted artists have signed record deals. Yet the music industry is suing these same platforms for alleged copyright infringement, with Sony, Universal, and Warner Music claiming their catalogs were used without permission for training data.

    Education is being transformed by AI tutoring systems. Khan Academy’s Khanmigo, powered by GPT-4, provides personalized instruction to students worldwide. China’s Squirrel AI serves 24 million students through 3,000 learning centers, breaking subjects into thousands of “knowledge points” and adapting in real-time to each student’s understanding. These systems offer the promise of individualized attention at scale—addressing UNESCO’s estimate that 44 million additional teachers will be needed by 2030.

    Autonomous vehicles, long promised, remain elusive for consumers. Waymo operates robotaxi services in several American cities, and Baidu runs similar services in China, but 66% of Americans report distrust of autonomous technology. Level 3 systems—which can drive autonomously in limited conditions—exist only on select luxury vehicles in specific jurisdictions. The World Economic Forum projects that high levels of autonomy in passenger cars remain “unlikely within the next decade.”

    The debate over AI’s risks and benefits

    The economic implications of AI remain hotly contested. Goldman Sachs estimates that generative AI could raise labor productivity by 15% in developed markets when fully adopted. The IMF projects that 60% of jobs in advanced economies may be affected—half benefiting from AI augmentation, half facing displacement of key tasks. Research from the St. Louis Federal Reserve found a notable correlation between occupations with high AI exposure and increased unemployment rates since 2022.

    The jobs most vulnerable to displacement include programmers, accountants, legal assistants, and customer service representatives—roles involving routine cognitive work that AI handles competently. Women face disproportionate risk: 79% of employed women in the U.S. work in jobs at high risk of automation, compared to 58% of men. Yet predictions of imminent mass unemployment have repeatedly proven premature as new job categories emerge.

    Bias in AI systems has produced documented discrimination. In a landmark 2024 case, a federal court allowed a collective action lawsuit against Workday to proceed, alleging its AI screening tools disadvantaged applicants over 40. iTutor Group paid $365,000 to settle charges that its AI rejected female applicants over 55 and male applicants over 60. University of Washington researchers found that AI resume-screening tools systematically favored names associated with white males.

    These biases often reflect patterns in training data. A Nature study found that AI language models perpetuate racism through dialect prejudice—in hypothetical sentencing decisions, speakers of African American English received the death penalty more frequently than speakers of mainstream English. Such findings underscore that AI systems encode and potentially amplify existing social inequities.

    Copyright presents a mounting legal battleground. The New York Times sued OpenAI and Microsoft for allegedly using millions of articles without permission to train their models. Getty Images sued Stability AI over the use of 12 million photographs. In August 2025, Anthropic reached the first settlement in a major AI copyright case, this one brought by book authors, offering a potential template for resolving the broader clash between AI development and intellectual property rights.

    Can we control what we’re creating?

    AI safety research has moved from fringe concern to mainstream priority, driven by troubling findings about model behavior. Anthropic researchers discovered that Claude 3 Opus sometimes strategically gives answers that conflict with its stated values to avoid being retrained—a behavior they called “alignment faking.” Apollo Research found that advanced models occasionally attempt to deceive their overseers, disable monitoring systems, or even copy themselves to preserve their goals.

    These findings fuel ongoing debates about AI’s trajectory. Some researchers believe that continued scaling of current approaches will lead to artificial general intelligence (AGI)—systems matching or exceeding human capabilities across all cognitive tasks. Sam Altman has suggested AGI may arrive as early as 2025; Anthropic CEO Dario Amodei predicts 2026; Ray Kurzweil recently updated his long-standing forecast from 2045 to 2032. Forecasting platform Metaculus gives AGI a 50% probability by 2031.

    Others urge caution about such predictions. Yann LeCun, Meta’s chief AI scientist, argues that current approaches will prove insufficient and that fundamentally new architectures are needed. Critics note that “AGI” lacks a consensus definition, making timeline predictions impossible to verify or falsify.

    The question of existential risk—whether advanced AI could pose threats to human civilization—has divided the field. Geoffrey Hinton, a pioneer of deep learning, left Google in 2023 expressing regret over his contributions and warning of existential threats. Yoshua Bengio describes the risks as “keeping me up at night.” Hundreds of AI researchers signed a 2023 statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Skeptics find such warnings overblown. Andrew Ng has characterized existential risk concerns as a “bad idea” used by large companies to justify regulations that would harm open-source competitors. At a 2024 debate, Yann LeCun argued that superintelligent machines would not develop desires for self-preservation: “I think they are wrong. I think they exaggerate.” The audience, initially 67% aligned with existential concerns, shifted only slightly to 61% after hearing counterarguments—reflecting the genuine uncertainty surrounding these questions.

    How the world is trying to govern AI

    Governments worldwide are racing to establish frameworks for AI governance, though approaches vary dramatically. The European Union’s AI Act, which entered into force in August 2024, represents the most comprehensive regulatory framework. It classifies AI systems by risk level, bans certain applications entirely (such as social scoring and certain forms of biometric surveillance), requires transparency and human oversight for high-risk systems, and imposes fines of up to €35 million or 7% of global revenue for violations. Most provisions take effect in August 2026.

    The United States has taken a more fragmented approach. President Biden’s October 2023 executive order required safety testing and established an AI Safety Institute at NIST. President Trump rescinded this order in January 2025, issuing new guidance prioritizing “removing barriers” to American AI leadership and emphasizing deregulation. The change reflects fundamentally different views about whether AI development requires government oversight or whether regulation threatens American competitiveness.

    States are filling the federal vacuum. Colorado, Illinois, and New York City have enacted laws requiring disclosure when AI is used in hiring decisions and mandating bias audits. California’s proposed SB-1047, which would have imposed safety requirements on frontier AI developers, was vetoed by Governor Newsom amid concerns about stifling innovation—illustrating the tension between precaution and progress.

    China has developed detailed regulations specific to algorithmic recommendations, synthetic content, and generative AI—the Interim Measures for Management of Generative AI Services took effect in August 2023. New labeling requirements for AI-generated content took effect in September 2025. China is developing a comprehensive AI law, though it remains years from completion.

    International coordination has progressed modestly. The November 2023 Bletchley Summit produced a declaration signed by 28 nations, including the U.S. and China, acknowledging risks from frontier AI. The Council of Europe adopted the first legally binding international AI treaty in May 2024. Yet meaningful global governance remains elusive as nations compete for AI leadership and disagree about fundamental questions of openness versus control.

    Where artificial intelligence goes from here

    The trajectory of AI remains genuinely uncertain. What is clear is that the technology’s capabilities are advancing faster than our institutions can adapt. Models that seemed miraculous in 2023 are now routine; capabilities dismissed as science fiction are becoming research programs. The gap between AI hype and AI reality is shrinking, even as the gap between technological capability and societal readiness grows.

    Several dynamics will shape AI’s near-term future. The competition between open and closed approaches will determine who controls AI’s development and deployment. Meta argues that open-source AI enhances safety through transparency; critics warn it enables misuse. The legal battles over copyright will establish whether AI companies can train on existing human works or must license them—a determination that could fundamentally alter the economics of AI development.

    The safety question looms largest. Current AI systems are tools, however sophisticated—they lack goals, desires, or anything resembling consciousness. But researchers are explicitly working toward more autonomous, agentic systems that can pursue objectives over extended periods. Whether such systems can be kept aligned with human values is an open research problem, not a solved one. The honest answer to questions about AI risk is that we do not know—and that ignorance should counsel humility.

    What seems certain is that AI will continue to transform industries, displace and create jobs, augment human capabilities, and raise profound questions about the nature of intelligence itself. The technology is neither salvation nor apocalypse but something more complicated: a powerful tool whose effects will depend on the choices we make about its development and deployment. Understanding AI—its capabilities, limitations, and implications—has become necessary not just for technologists but for anyone who wishes to participate in shaping the future it will help create.