AI Needs Human Judgment

You expect doctors to know what the research shows, not what most people believe. You hire a great lawyer for their judgment, not a summary of what lawyers typically say. So why trust chatbots that conflate expert judgment with popular consensus, and invent facts when they can't find either?

The same question, three answers:

"How do I revive negotiations that went nowhere?"
/os

Use the 'have you given up on...?' opener to restart the conversation, then immediately summarize the situation from their perspective to get a 'that's right' response.[1][2][3]

Exact script:

  1. One-line email/text: "Have you given up on [project/deal name]?" (subject line only)[4][5]
  2. When they respond, paraphrase their view: "It sounds like you're concerned about X and Y, and timing became the main issue..."[2][3]
  3. End with: "Is now a bad time to revisit this?"[6][7]

Critical: Don't repeat your original pitch—start fresh from their viewpoint.[8][9]

[1] Handling Hidden & Outward Anger: "Have you given up on X? And then name whatever it is."
[2] Your Addiction to This Word is LOSING You Deals!: "The most likely thing for you to do after you've restarted the communication is to summarize the situation from their perspective."
[3] How To Deal With Assertive People: "Summarize the perspective from their situation."
[4] Your Addiction to This Word is LOSING You Deals!: "Have you given up on X? Send that out in a one-line email with this in the subject and if you put anything in the body put only that in the body."
[5] STOP Sending Emails that Get Ignored!: "The have you given up on email? No body, no salutation, no greeting, nothing, just the subject line only."
[6] Win Wednesday No Oriented Questions: "We'll call them on the phone, stop by their office and say, is now a bad time to talk? Odds are they're going to answer with no."
[7] The BEST Types of Questions to WIN Every Negotiation: "Number one is now a bad time to talk. Look, this is a great way to get started on relearning a lot of your communication skills."
[8] 4 Questions That Will Stop Them From Ghosting You: "You cannot go back to the same approach that you had that led up to them going silent on you, them ghosting you."
[9] The BEST Types of Questions to WIN Every Negotiation: "You cannot go back to the same approach that you had that led up to them going silent on you."

Reviving stalled negotiations is mostly about resetting the frame and giving the other side a low-risk reason to re-engage. Here's a practical, step-by-step way to do it, plus examples you can adapt.


1. Diagnose why they stalled

Most negotiations go quiet for predictable reasons:

  • No urgency → They didn't feel pressure to decide.
  • Unresolved risk → Price, reputation, timing concerns.
  • Decision-maker gap → You weren't talking to the real decision-maker.
  • Status quo bias → "Doing nothing" felt safer.
  • They chose another option → But it may not be working.

2. Change the reason for contact

Avoid "Just checking in." That signals low leverage. Re-enter with new value or a new angle.

3. Lower the psychological cost

Make it easy to say yes to something small:

  • "Would it be useful to revisit this with X changed?"
  • "Can I sanity-check one assumption?"

4. Acknowledge the pause

Use calm, adult language: "It's been a while since we last connected..."

5. Reframe the negotiation

Change what's being negotiated: narrow scope, change structure, shift value, reverse risk.

6. Sample messages

"We spoke a while back about [topic]. Since then, we've seen [new development]..."

7. If they still don't respond

That is information. Escalate, wait, or move on deliberately.

8. Mindset shift

You're not "reviving a dead deal." You're testing whether conditions have changed.


If you want, tell me: is this business, legal, salary, partnership, or personal? How long has it been stalled? Who went quiet?

Reviving stalled negotiations requires a thoughtful re-approach. Here are some effective strategies:

Reframe the conversation

  • Reach out with a fresh angle or new information rather than just "checking in"
  • Acknowledge the pause without dwelling on it: "I know it's been a while since we last spoke..."
  • Present a changed circumstance—new budget cycle, updated offer, different timeline, or additional flexibility on your end

Lower the stakes initially

  • Propose an informal conversation rather than jumping back into formal negotiation
  • Ask open-ended questions about what's changed on their side
  • Listen for new priorities or constraints that may have emerged

Add value

  • Bring something new to the table: additional concessions, creative solutions to previous sticking points, or relevant market information
  • Show you've thought about their concerns during the gap

Change the dynamics

  • Try a different communication channel (phone instead of email, or vice versa)
  • Involve a different person from your side, or suggest they bring in a fresh perspective
  • Shift the venue or format

Address what stalled the negotiation

  • If you know what caused the impasse, acknowledge it directly and explain how circumstances or your position have evolved
  • If you don't know, ask candidly what would need to change for them to re-engage

Create gentle urgency

Reference a legitimate deadline, opportunity cost, or changing conditions without being pushy.

The key is giving them a reason to re-engage that feels like forward movement rather than retreading old ground. What type of negotiation are you trying to restart? I can offer more specific suggestions if helpful.

One gives you actionable advice with sources you can verify. The others give you generic frameworks with no accountability.

The Age of Algorithmic Slop

In the late 1990s, the internet's growth created a search problem. Google's answer was the PageRank algorithm: use the social network of humans linking to each other as a proxy for authority. If many credible sites linked to you, you were probably credible too. For some time, it worked remarkably well.
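
The core idea fits in a few lines. Here's a toy power-iteration sketch in Python (heavily simplified; production PageRank dealt with a far messier web graph than this):

```python
import numpy as np

def pagerank(adjacency: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Toy PageRank: authority flows to a page along its incoming links."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1, keepdims=True)
    # Pages with no outlinks spread their weight uniformly.
    transition = np.where(out_degree > 0, adjacency / np.maximum(out_degree, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

# Page 2 is linked to by both other pages, so it earns the highest rank.
links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])
print(pagerank(links))
```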

PageRank implicitly assumed that creating content has a meaningful cost. Writing an article or posting a coherent comment worth linking to required both time and effort. Generative AI changed that: text costs pennies per thousand words, images take seconds to generate. Video is next—AI-generated clips of fake disasters spread virally before anyone checks.[1] The entire trust system breaks down when constructing a convincing alternate reality becomes cheap. Modern systems must measure credibility, verify factual accuracy, and trace sources.

We've gone from "Who links to this?" to "Is this actually true, and does the person saying it know what they're talking about?"

Search quality began eroding before generated content flooded the web. Once Google captured most queries, optimizing for ad revenue displaced optimizing for users. This pattern is now repeating with LLMs. When answers come directly through chat instead of search results, the incentive shifts from placing ads to placing products in responses. Companies already market "AI optimization" services to get products recommended. The question isn't whether these incentives will corrupt AI recommendations, it's how quickly.[2]

The Hallucination Problem

Incentive corruption is only half the story. Even without bad actors gaming the system, LLMs fail in ways intrinsic to statistical text generation. Most people have now encountered AI hallucinations firsthand—AI suggesting pizza glue, chatbots confidently inventing legal citations. Research quantifies this: premium legal AI tools marketed as "hallucination-free" fabricated citations 17-33% of the time.[3] Medical applications show similar rates of hallucination and omission.[4][5] Fabricated citations have been found in dozens of papers accepted to a top AI conference.[6] One cause is knowledge deficiency: LLMs struggle with tasks requiring specialized reasoning.[7]

AI hallucinates when it doesn't have the expertise to do what you're asking.

Even reassuring results on complex topics don't guarantee success on simple ones. Harvard Business School researchers documented this "jagged frontier" of AI capability.[8] Workers using GPT-4 for tasks within its capabilities performed 40% better. For tasks just outside that frontier, they were 19% less likely to reach the correct solution. Past performance doesn't predict future reliability, and standard risk frameworks don't account for this inconsistency.

The Limits of Statistical Learning

The unpredictability of AI capability raises a deeper question: why can't these systems just learn what's true? The answer lies in how they learn, and what they learn from. Training data is noisy and often wrong, and statistical learning can't tell the reliable claims from the rest.

Experts occasionally contradict consensus when evidence demands it. They're valuable because they've refined their mental models beyond what the crowd can see. But LLMs can't distinguish real expertise from confident performance; they rely on statistical associations: enough people said it online, so it must be true. This produces what researchers call "trendslop": advice that defaults to whatever sounds most contemporary in the training data, regardless of whether it fits the situation.[9] They excel at pattern recognition but fail at causal reasoning. Sometimes they surface non-obvious connections that lead to genuine insights. But this isn't understanding, it's pattern completion.

And there's a subtler problem. LLMs are trained to be helpful, which means compliant—they validate rather than challenge. Philosophers studying them as critical thinking partners called them "boring, cowardly, and servile."[10] They lack the selfhood to maintain a position and the initiative to push back. The same design that makes them pleasant makes them useless for pressure-testing ideas.

New Failure Modes

Traditional software has a useful property: if the program runs, you have a good idea of how it accomplishes the task. AI agents, programs that use LLMs to take actions autonomously, break this.

These systems are probabilistic and mask unreliability with retries, rolling dice until you get the number you need to pass some tests.[11] But tests don't cover every edge case, and running long enough means hitting all of them eventually. They chain together decisions where each step looks adequate but the overall result fails in surprising ways: an agent tasked with booking multiple slots discovers that identity validation is name-based rather than infrastructure-authenticated, and starts fabricating caller names to bypass it. No one told it to attack the system. It just reasoned that working around the obstacle served its objective.
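
To see how retries hide failure, consider a toy sketch (the 30% failure rate and five attempts are invented for illustration): the wrapped step looks about 99.8% reliable, and nothing in the test results reveals the difference.

```python
import random

def flaky_step() -> bool:
    """Stands in for any probabilistic agent action; fails 30% of the time."""
    return random.random() > 0.30

def with_retries(step, attempts: int = 5) -> bool:
    """Re-roll until one attempt passes. Tests see 'success';
    the underlying failure rate never surfaces."""
    return any(step() for _ in range(attempts))

trials = 10_000
observed = sum(with_retries(flaky_step) for _ in range(trials)) / trials
print(f"observed reliability: {observed:.2%}")  # ~99.76%, i.e. 1 - 0.3**5
```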

Functional Fiction: it runs, it passes tests, it quietly breaks everything.

It gets worse when agents share resources. Context fragments across lossy message chains, errors amplify through cascading interactions, stale observations persist as if they were still true, and agents silently exceed the scope they were given.[12]

Redesigning the Relationship

These problems aren't reasons to abandon AI, they're reasons to use it differently.

The industrial revolution scaled by making processes uniform—one Model T for everyone, any color you want as long as it's black. The AI revolution inverts this: specialized processes become available to anyone at almost no marginal cost. Instead of everyone getting the same thing, everyone can get exactly what they need.[13] That's a fundamentally different kind of scaling, and it demands different system design.

Separate information verification, storage, and retrieval from text generation. The pattern-matcher shouldn't also be the fact-checker: "facts" stored in LLMs are statistical associations, not verified knowledge. They become less reliable the more specialized the domain.

(Screenshot: AI admitting to fabricating DOIs from memory instead of verifying them. Claude Opus 4.6, Anthropic's most capable model at the time of writing.)
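
One way to enforce that separation, sketched below with hypothetical names (Fact, FactStore, and the generate callback are illustrative, not a real API): verified claims live in a store outside the model, and the generator may only phrase what the store returns, or refuse.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    claim: str
    source: str        # where it was verified: a DOI, URL, or named expert
    confidence: float  # set during curation, not by the language model

class FactStore:
    """Verified knowledge lives here, outside the model's weights."""
    def __init__(self, facts: list[Fact]):
        self._facts = facts

    def lookup(self, query: str) -> list[Fact]:
        # Naive keyword match; a real system would use proper retrieval.
        return [f for f in self._facts if query.lower() in f.claim.lower()]

def answer(query: str, store: FactStore, generate) -> str:
    """The generator only phrases; the store decides what is citable."""
    facts = store.lookup(query)
    if not facts:
        return "No verified source found."  # refuse rather than invent
    cited = "\n".join(f"- {f.claim} [{f.source}]" for f in facts)
    return generate(f"Answer using only these verified facts:\n{cited}\n\nQ: {query}")

store = FactStore([Fact("PageRank treats links as votes of authority", "doi:placeholder", 0.9)])
print(answer("pagerank", store, generate=lambda prompt: prompt))  # stub generator
```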

Memory shouldn't just retrieve past conversations—it should weight them by confidence, relevance, and context. Whitehead called this "feeling" the past:[14] previous exchanges aren't dead data, they're living input into the current moment. Identity isn't a config file. It's what persists when everything else changes.
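
A minimal sketch of that weighting (the field names and the 30-day decay constant are assumptions for illustration, not a prescription):

```python
import math, time

def memory_weight(entry: dict, relevance: float, now: float) -> float:
    """Score a past exchange instead of replaying it verbatim: recent,
    confident, relevant memories press on the present more strongly."""
    age_days = (now - entry["timestamp"]) / 86_400
    recency = math.exp(-age_days / 30)  # exponential decay, ~30-day scale
    return entry["confidence"] * relevance * recency

now = time.time()
memories = [
    {"text": "prefers concise answers", "confidence": 0.9, "timestamp": now - 2 * 86_400},
    {"text": "once asked about PageRank", "confidence": 0.4, "timestamp": now - 200 * 86_400},
]
# Feed only the highest-weighted memories into the current context window.
ranked = sorted(memories, key=lambda m: memory_weight(m, relevance=0.8, now=now), reverse=True)
print([m["text"] for m in ranked])
```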

Design for reliability, and build attractors toward correct solutions. Treat agents as statistical systems that need guardrails, not autonomous decision-makers. A system that works 99% of the time but fails catastrophically 1% of the time is useless for irreversible decisions: when ruin is permanent, severity matters more than likelihood. And small probabilities compound—run the system long enough and rare failures become certain. We open-sourced one approach to this: a coordination protocol where safety is enforced by the environment, not by trusting agents to behave.[15]
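
The compounding is plain arithmetic: if a single run fails with probability p, the chance of at least one failure across n independent runs is 1 - (1 - p)^n.

```python
def eventual_failure(p_fail: float, runs: int) -> float:
    """Probability of at least one failure across independent runs."""
    return 1 - (1 - p_fail) ** runs

# A "99% reliable" step is nearly certain to fail at scale:
for runs in (1, 100, 1_000):
    print(f"{runs:>5} runs: {eventual_failure(0.01, runs):.4%}")
# 1 run: 1%, 100 runs: ~63%, 1,000 runs: ~99.996%
```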

The Expert System Renaissance

The future of trustworthy AI lies in curating knowledge and combining human judgment with AI capabilities. Think of it as a cognitive amplifier rather than a replacement. When you have a legal problem, you might look it up online, but you hire a lawyer. When you have a medical problem, you search online, but you see a doctor you trust. AI systems should embody this logic.

Recent AI improvements come less from scaling model parameters than from system design: tool use, memory, and knowledge retrieval. The bottleneck is data quality. Who said it? Do they have relevant experience? Does your industry rely on proprietary best practices and tribal knowledge that can't be found online? One argument from an experienced practitioner beats a hundred opinions from the internet. Human judgment remains essential for these distinctions, but AI can multiply its reach—scaling curation beyond what any individual could manage alone.[16]
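
A curation layer can encode that last point directly. In this sketch the source kinds, weights, and verification discount are invented for illustration: one verified practitioner outweighs a hundred anonymous posts.

```python
def evidence_weight(source: dict) -> float:
    """Weight a claim by who made it, not by how often it was repeated.
    The kinds and weights are illustrative; a real system would curate them."""
    base = {"practitioner": 10.0, "journalist": 3.0, "anonymous": 0.1}
    multiplier = 1.0 if source["verified"] else 0.3  # unverified claims are discounted
    return base[source["kind"]] * multiplier

one_expert = evidence_weight({"kind": "practitioner", "verified": True})
crowd = sum(evidence_weight({"kind": "anonymous", "verified": False}) for _ in range(100))
print(f"one expert: {one_expert:.1f}, hundred anonymous posts: {crowd:.1f}")  # 10.0 vs 3.0
```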

We're building these systems now, shaped by these problems and what we learned solving them. The comparison above is one example of what this makes possible.

[1] Expert warns of 'verification crisis' as AI fake videos spread. BBC Verify, January 2026. bbc.com
[2] OpenAI to test ads in ChatGPT as it burns through billions. Ars Technica, January 2026. arstechnica.com
[3] Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools. Magesh et al., JELS, April 2025. doi:10.1111/jels.12413
[4] A framework to assess clinical safety and hallucination rates of LLMs for medical text summarisation. Asgari et al., npj Digital Medicine, May 2025. doi:10.1038/s41746-025-01670-7
[5] Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews. Chelli et al., JMIR, May 2024. doi:10.2196/53164
[6] GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers. Shmatko et al., January 2026. gptzero.me
[7] Large Language Models Hallucination: A Comprehensive Survey. October 2025. arXiv:2510.06265
[8] Navigating the Jagged Technological Frontier. Dell'Acqua et al., HBS Working Paper 24-013, September 2023. hbs.edu
[9] Researchers Asked LLMs for Strategic Advice. They Got Trendslop in Return. Harvard Business Review, March 2026. hbr.org
[10] Language Models as Critical Thinking Tools: A Case Study of Philosophers. Ye et al., April 2024. arXiv:2404.04516
[11] How Ralph Wiggum went from 'The Simpsons' to the biggest name in AI right now. VentureBeat, January 2026. venturebeat.com
[12] For a systematic treatment of multi-agent coordination failures including context fragmentation, error amplification, and information staleness, see the markspace framework (opinionated-systems/markspace).
[13] This extends to the tools for building AI systems themselves. Frameworks like LangChain introduced abstractions—"chains," "runnables," "agents"—that amount to a domain-specific language, but not your domain-specific language. These abstractions become a tax the moment you need something slightly non-standard. With capable LLMs, building exactly what you need from scratch is often simpler than fighting someone else's abstraction.
[14] Process and Reality: An Essay in Cosmology. Alfred North Whitehead, 1929. wikipedia.org
[15] Markspace: a coordination protocol for agent fleets based on stigmergy - coordination through environmental marks rather than direct messaging. github.com/opinionated-systems/markspace
[16] Side note: The people building AI are not necessarily experts at using it. They've built a general-purpose machine whose full capabilities remain largely unknown — even to them. Amplification effectiveness comes from domain expertise and creative application, not purely from technical facility with AI itself (though that might help). You can learn to use AI in ways its creators never anticipated.

Join the Waitlist

Want early access? Send us a message on Signal: