Saturday, December 13, 2025

Eat, Don’t Talk: Why You Did Not Win That Thanksgiving Argument

An excellent article (transcript, actually) on why arguing over politics in this day and age goes nowhere. You are not changing minds; you are hardening positions.

"There was a time when political minds changed over coffee and grew cold during long conversations. In living rooms where neighbors lingered after dinner parties, on front porches where strangers became friends through the slow accumulation of shared evenings. These weren’t spaces designed for political persuasion. They were simply the ordinary architecture of human connection, where ideas shifted not through debate, but through the quiet witness of how other people live their lives. That world is vanishing. We conduct our most intimate conversations through screens now. Swipe our way through potential connections and mistake the performance of our digital selves for genuine relationship. The coffee shops have Wi-Fi passwords instead of lingering conversations. The front porches have been replaced by Ring doorbells. Even our protests happen as much online as in the streets."

https://whowhatwhy.org/podcast/eat-dont-talk-why-you-did-not-win-that-thanksgiving-argument/  

Saturday, November 08, 2025

Meet Kimi Linear: Faster long-context AI that uses less memory and beats full attention


There’s a new article on making large AI models faster and more memory-efficient. The original abstract is written for specialists, which is perfect for readers deep in the field but tougher for everyone else. To make it easier to engage with the ideas, I’m sharing the original abstract alongside friendlier rewrites, graded by difficulty with a skiing analogy.
Pick your slope: the black link goes to the original, the blue version is for people who know LLMs but don’t track research details, and the green version is for well-educated novices who use AI but don’t speak the jargon. If helpful, I can add a bunny hill version for absolute newcomers.

Green Slope 

Kimi Linear is a new way to build large AI models that is faster and more memory-efficient than today’s standard “full attention” models, while also being more accurate in our tests. It works well on short and very long inputs, and in reinforcement learning settings.

At the core is Kimi Delta Attention (KDA), a compact attention module that treats part of the model like a small working memory. A finer set of gates helps the model decide what to keep and what to forget, so it uses that memory more effectively. We also process text in manageable chunks and use a lightweight math trick that cuts computation without changing what the model learns.
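To make the gated working-memory idea concrete, here is a toy sketch of a gated delta-rule state update in plain NumPy. It is only an illustration of the general mechanism, not the actual KDA kernel (the real version is chunkwise and hardware-optimized), and the function name, shapes, and numbers are mine.

```python
import numpy as np

def gated_delta_step(S, q, k, v, gate, beta):
    """One toy recurrent step of a gated delta-rule memory update.

    S    : (d_k, d_v) state matrix, the fixed-size "working memory"
    q, k : (d_k,) query / key vectors for the current token
    v    : (d_v,) value vector for the current token
    gate : (d_k,) per-channel forget gates in (0, 1), finer-grained than one scalar
    beta : scalar write strength in (0, 1)
    """
    S = gate[:, None] * S                        # selectively forget, channel by channel
    prediction = k @ S                           # what the memory currently returns for k
    S = S + beta * np.outer(k, v - prediction)   # delta rule: nudge the memory toward v
    out = q @ S                                  # read-out for the current query
    return S, out

# Tiny usage example with random data: the state never grows with input length.
rng = np.random.default_rng(0)
d_k, d_v = 8, 4
S = np.zeros((d_k, d_v))
for _ in range(16):
    q, k, v = rng.normal(size=d_k), rng.normal(size=d_k), rng.normal(size=d_v)
    gate = rng.uniform(0.8, 1.0, size=d_k)
    S, out = gated_delta_step(S, q, k, v, gate, beta=0.5)
```

The thing to notice is that S stays the same size no matter how many tokens flow through it, which is where the long-context memory and speed savings come from.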

We trained a hybrid model with 3B active parameters (48B total) that mixes KDA with Multi-Head Latent Attention. Using the same training setup, this model beat a full-attention baseline on every task we checked. It used up to 75% less key-value cache memory (the short-term memory used during generation) and reached up to 6× faster decoding on 1-million-token inputs. In practice, you can swap Kimi Linear in for full attention models and get better accuracy and efficiency, especially on long inputs and outputs.
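A back-of-envelope calculation shows why keeping a growing key-value cache in only a fraction of the layers cuts memory roughly by that fraction. The layer count, head sizes, and the 3-to-1 hybrid ratio below are assumptions chosen purely to make the up-to-75% figure concrete; they are not the model's actual configuration.

```python
def kv_cache_bytes(layers_with_kv, seq_len, kv_heads, head_dim, bytes_per_elem=2):
    """Rough KV-cache size for attention layers that store K and V for every token."""
    return layers_with_kv * seq_len * 2 * kv_heads * head_dim * bytes_per_elem

# Hypothetical sizes, chosen only to make the ratio concrete.
layers, seq_len, kv_heads, head_dim = 32, 1_000_000, 8, 128

full_attention = kv_cache_bytes(layers, seq_len, kv_heads, head_dim)

# In a 3:1 hybrid, only 1 layer in 4 keeps a per-token KV cache; the linear-attention
# layers carry a fixed-size state whose cost does not grow with sequence length.
hybrid = kv_cache_bytes(layers // 4, seq_len, kv_heads, head_dim)

print(f"full attention: {full_attention / 1e9:.1f} GB")
print(f"3:1 hybrid:     {hybrid / 1e9:.1f} GB (~{1 - hybrid / full_attention:.0%} smaller)")
```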

We are releasing the KDA kernel, vLLM integrations, and both pretrained and instruction-tuned checkpoints for others to use.

Blue Slope

We introduce Kimi Linear, a hybrid linear attention architecture that, for the first time, outperforms full attention under fair comparisons across various scenarios -- including short-context, long-context, and reinforcement learning (RL) scaling regimes. At its core lies Kimi Delta Attention (KDA), an expressive linear attention module that extends Gated DeltaNet with a finer-grained gating mechanism, enabling more effective use of limited finite-state RNN memory. Our bespoke chunkwise algorithm achieves high hardware efficiency through a specialized variant of the Diagonal-Plus-Low-Rank (DPLR) transition matrices, which substantially reduces computation compared to the general DPLR formulation while remaining more consistent with the classical delta rule.

We pretrain a Kimi Linear model with 3B activated parameters and 48B total parameters, based on a layerwise hybrid of KDA and Multi-Head Latent Attention (MLA). Our experiments show that with an identical training recipe, Kimi Linear outperforms full MLA with a sizeable margin across all evaluated tasks, while reducing KV cache usage by up to 75% and achieving up to 6 times decoding throughput for a 1M context. These results demonstrate that Kimi Linear can be a drop-in replacement for full attention architectures with superior performance and efficiency, including tasks with longer input and output lengths.
To support further research, we open-source the KDA kernel and vLLM implementations, and release the pre-trained and instruction-tuned model checkpoints.

Black Slope


Monday, June 23, 2025

Getting Things Done as a System: Becoming Like Water


"Water is what it is, and does what it does. It can overwhelm, but it’s not overwhelmed. It can be still, but it is not impatient. It can be forced to change course, but it is not frustrated. Get it?" 

– David Allen, Getting Things Done



INPUTS (Uncontrollable, Constant Flow)

  • Emails
  • Meetings
  • Deadlines
  • Crises
  • Unexpected events

Step 1: Capture ("Container for Flow")

  • Collect all incoming information (notes, inboxes, voice memos, lists)
  • Don't resist the incoming current, just catch it

Step 2: Clarify ("Remove Debris")

  • Process each captured item: What is it? Is action needed?
  • Define next actions, projects, reference material
  • Let go of ambiguity (clearing the water)

Step 3: Organize ("Directing Channels")

  • Sort clarified items into systems:

    • Next actions
    • Waiting for
    • Projects
    • Someday/maybe
    • Calendar
  • Each item flows into an appropriate channel

Step 4: Reflect ("Monitoring Currents")

  • Weekly reviews
  • Reconnect with priorities
  • Adjust the system as needed

Step 5: Engage ("Act With Flow")

  • Work from trusted lists
  • Choose actions based on context, time, energy, priority
  • Stay present and responsive, like water flowing around obstacles
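For the systems-minded, the five steps above can also be read as a tiny pipeline. Here is an illustrative sketch of that reading in Python; the class and channel names are mine, not David Allen's, and the point is only to show items flowing from capture into trusted lists.

```python
from dataclasses import dataclass, field

@dataclass
class GTDSystem:
    """A toy model of the flow: capture -> clarify -> organize -> engage."""
    inbox: list = field(default_factory=list)          # Step 1: catch the current
    channels: dict = field(default_factory=lambda: {
        "next_actions": [], "waiting_for": [], "projects": [],
        "someday_maybe": [], "calendar": [], "reference": [],
    })

    def capture(self, item):
        self.inbox.append(item)                        # don't resist, just collect

    def clarify_and_organize(self, route):
        """Steps 2-3: empty the inbox, sending each item into a channel.

        `route` is any function item -> channel name (your own judgment call).
        """
        while self.inbox:
            item = self.inbox.pop(0)
            self.channels[route(item)].append(item)

    # Step 4 (Reflect) is the human part: the weekly review that keeps the
    # routing rules and these lists trustworthy.

    def engage(self, context):
        """Step 5: pick work from trusted lists for the current context."""
        return [item for item in self.channels["next_actions"] if context in item]

# Usage: capture everything, clarify and organize regularly, act from the lists.
gtd = GTDSystem()
gtd.capture("email: reply about the budget question")
gtd.capture("call the plumber")
gtd.clarify_and_organize(route=lambda item: "next_actions")
print(gtd.engage(context="call"))   # -> ['call the plumber']
```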


Core System Qualities ("Water-Like Mindset")

  • Adaptable
  • Non-resistant
  • Emotionally neutral
  • Always moving, never stuck
  • Calm even when fast-moving
  • Changes form without losing essence


Mindset Shift

Instead of fighting complexity or feeling overwhelmed:

  • Trust the system to hold the complexity.
  • Allow your mind to stay clear and present.
  • Flow with change instead of resisting it.

"Water is what it is, and does what it does."


Systems Thinking Layer:

Your mind + GTD system = an adaptive human-technical system operating inside larger dynamic systems (work, family, world events).

  • Feedback loops: Regular reviews keep the system calibrated
  • Emergence: Complex projects unfold through iterative small actions
  • Path dependence: Early clear captures reduce downstream chaos


Mindfulness Layer:

  • Presence without reactivity
  • Awareness of flow and obstacles without attachment
  • Acceptance of what arises, acting skillfully in response


David Allen's implicit lesson:

"You cannot control the river. But you can learn to move like water."

Saturday, June 21, 2025

Fascism as Systems Failure

This post is based on the following quote from Robert Paxton's book, The Anatomy of Fascism:

“They expected that inevitable war would allow the master races, united and self-confident, to prevail, while the divided, “mongrelized,” and irresolute peoples would become their handmaidens. Fascism had become conceivable, as we will soon see, because it offered a new way of responding to the anxieties of an age of mass politics, mass mobilization, and acute social tension.” 

– Robert O. Paxton, The Anatomy of Fascism

First, Paxton’s key insight

Fascism is not simply an ideology. It’s a mobilizing process - it emerges out of systemic breakdowns, unresolved tensions, and the failures of existing institutions to adapt to accelerating social, political, and economic complexity.

In systems language

  • Fascism arises when a system’s existing governance structures, narratives, and feedback mechanisms lose their capacity to absorb growing tensions.
  • It is a form of path-dependent systemic response to perceived loss of control, identity, or coherence in the face of destabilizing forces.

The system conditions Paxton describes are classic complex adaptive system stressors:

  • Rapid social change (modernization, urbanization, mass politics)
  • Shifting power dynamics (loss of imperial power, decline of old elites)
  • Economic instability (global depression, unemployment, inflation)
  • Mass disorientation (loss of cultural anchors, new media environments)
These are nonlinear, interacting stressors - not unlike the kinds of "polycrisis" or "tipping points" we talk about in contemporary systems change.

Fascism as an emergent attractor:

In systems terms, fascism operates as an emergent attractor that offers:
  • Simple narratives that resolve complexity into “us vs. them” binaries.
  • Restored identity (purity, unity, strength) for disoriented populations.
  • Rapid action (often violent or extralegal) that bypasses paralyzed institutions.
  • A centralizing control structure that promises to stabilize perceived chaos.
Systems under great stress often seek lower-complexity attractors - simplifications that provide temporary homeostasis. Fascism exploits this dynamic brutally.


Why this matters for systems change practice:

  • Systems change isn’t always progressive. Change processes can produce regressive attractors when people’s legitimate anxieties are hijacked by actors offering oversimplified solutions.
  • Legitimacy vacuums invite dangerous alternatives. When formal institutions fail to evolve fast enough, informal and extra-institutional movements may fill the vacuum.
  • Narrative control is central. Fascist movements masterfully reframed systemic grievances into identity-based, zero-sum narratives. This is why systems change practitioners increasingly recognize the importance of collective sense-making and narrative emergence in shifting systems toward more inclusive, adaptive futures.
  • Early signals matter. Paxton emphasizes how fascism initially operates “within the system” before fully seizing power - exploiting democratic weaknesses before destroying democracy. Systems change work often focuses on early feedback signals that show whether adaptation is building resilience or breaking down.

Direct link to democratic systems change:

Democratic systems change practitioners today work precisely at the fault lines where fascism historically gained ground:
  • Polarization
  • Distrust in institutions
  • Declining civic capacity for complexity
  • Fragmentation of collective identity
  • Erosion of common facts
The core challenge is helping societies maintain adaptive capacity in the face of complexity - rather than falling into the low-complexity attractors of authoritarianism.

Put simply

Fascism is what happens when systems fail to manage complexity with adaptive, inclusive, participatory change - and instead shift to autocratic simplifications that promise certainty, purity, and control.

Structure of Causal Loop Diagram: Fascism as Systems Failure

Here is the structure of a basic causal loop diagram representing "Fascism as Systems Failure":


Fascism as Systems Failure: Causal Loops

Core Feedback Loops

Complexity-Stress Loop (Reinforcing)

  • Rapid Social Change (+)
  • Institutional Capacity (-)
  • Social Disorientation (+)
  • Anxiety & Fear (+)
  • Demand for Simple Narratives (+)
  • Vulnerability to Authoritarian Movements (+)

Explanation: Rapid modernization, economic shifts, and social change outpace institutional adaptation, fueling public anxiety and making simplified explanations appealing.
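As a purely schematic illustration of what "reinforcing" means here, the sketch below simulates a generic two-variable loop in which stress erodes capacity and eroded capacity feeds back into more stress. The variables, coefficients, and linear form are invented for illustration; this is a cartoon of loop dynamics, not a model of any actual society.

```python
# A generic reinforcing loop: two variables that keep amplifying each other.
# All numbers are arbitrary; the point is the runaway shape, not the values.
stress, capacity = 1.0, 1.0
for step in range(10):
    stress += 0.3 * stress / max(capacity, 0.1)    # stress grows faster as capacity falls
    capacity = max(capacity - 0.02 * stress, 0.0)  # rising stress erodes capacity
    print(f"step {step}: stress={stress:.2f}, capacity={capacity:.2f}")
# Without a balancing loop (adaptation, reform, rebuilt trust), the reinforcing
# dynamic keeps accelerating instead of settling.
```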

Legitimacy-Erosion Loop (Reinforcing)

  • Institutional Failure (+)
  • Public Distrust (+)
  • Weakening of Democratic Norms (+)
  • Elite Fragmentation (+)
  • Openings for Demagogues (+)
  • Alternative Power Structures (+)
  • Further Institutional Failure (+)

Explanation: As institutions fail to address growing complexity, trust erodes, elites splinter, and non-democratic actors gain influence, further weakening institutional legitimacy.

Identity-Threat Loop (Reinforcing)

  • Cultural Mixing / Migration (+)
  • Perceived Identity Threat (+)
  • Nationalist Identity Narratives (+)
  • In-Group Solidarity (+)
  • Out-Group Blame (+)
  • Political Polarization (+)
  • Identity Threat (+)

Explanation: Social diversity and cultural change activate identity-based fears, which are exploited by fascist narratives framing diversity as existential threat.

Order-Restoration Loop (Reinforcing)

  • Fear of Chaos (+)
  • Desire for Strong Leadership (+)
  • Support for Authoritarian Solutions (+)
  • Centralized Power (+)
  • Suppression of Dissent (+)
  • Temporary Stability (+)
  • Long-Term System Fragility (+)

Explanation: As fear grows, people support leaders who promise stability through strong central control, but these solutions create brittle systems that suppress adaptive capacity.

Key Insight for Systems Change:

Fascism emerges not as an isolated ideology, but as a systemic attractor in the context of governance failure, complexity mismanagement, narrative control breakdowns, and identity threat amplification. Effective systems change must strengthen adaptive capacity, narrative pluralism, and inclusive governance to prevent these reinforcing loops from locking in.



Systems: Emergent Attractors

In systems thinking, an attractor is a kind of “preferred pattern” that a complex system tends to settle into over time. Imagine dropping a marble onto a landscape of hills and valleys — the marble may roll around for a while, but eventually it will settle into one of the valleys. That valley is like an attractor: once the system is there, it tends to stay there unless something significant knocks it out. The same idea applies to social systems, economies, ecosystems, or political movements — certain patterns of behavior, relationships, and feedback reinforce themselves and become stable over time.
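The marble-and-valleys picture corresponds to a textbook dynamical system with two basins of attraction. Here is a small illustrative simulation of a double-well landscape: wherever the marble starts, it rolls into one of two valleys and stays there unless the push is big enough to clear the hump between them. The potential and step sizes are standard toy choices, not anything specific to the social systems discussed below.

```python
import numpy as np

def slope(x):
    """Gradient of the double-well potential V(x) = x**4/4 - x**2/2.

    The two valleys (attractors) sit at x = -1 and x = +1; the hump between
    them is at x = 0.
    """
    return x**3 - x

def settle(x, steps=400, dt=0.05):
    """Roll downhill (follow -dV/dx) until the marble stops moving."""
    for _ in range(steps):
        x -= dt * slope(x)
    return x

for start in (-2.0, -0.3, 0.3, 2.0):
    print(f"start {start:+.1f} -> settles near {settle(start):+.2f}")

# A small reform is a small nudge: from the valley at -1, a push of 0.5 slides
# right back into the same valley, while a push past the hump tips the system
# into the other basin of attraction.
print(f"{settle(-1.0 + 0.5):+.2f}")   # back to -1: the same old pattern
print(f"{settle(-1.0 + 1.2):+.2f}")   # near +1: a different attractor
```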

Not all attractors are equally healthy or desirable. Some attractors produce stable democracies, functioning markets, or resilient communities. Others lead to destructive outcomes, like authoritarian regimes, cycles of poverty, or ecological collapse. What makes systems change so challenging is that once a system has settled into a particular attractor, it resists change — small reforms may slide right back into the same old patterns. Moving a system to a new attractor usually requires shifting multiple elements at once: narratives, incentives, power structures, and feedback loops.

The idea of attractors helps us see why complex problems don’t always respond to linear solutions. Instead of asking “what’s the fix?”, systems thinking asks “what keeps pulling the system into this pattern — and how do we reshape the deeper forces so that healthier patterns can emerge and sustain themselves?”


In complex social systems, stability often takes the form of attractors — self-reinforcing patterns of behavior and governance that a society gravitates toward. For example, a functioning democracy may form a stable attractor where feedback loops support participation, accountability, and adaptive governance. However, mounting stresses — such as economic shocks, identity conflicts, or loss of institutional trust — can push the system beyond the stability of its democratic attractor. If key reinforcing loops break down, the system may shift abruptly toward a different stable state, such as authoritarianism or fascism, where feedback loops now reinforce centralized power, exclusion, and rigid control.

The shift between attractors often requires significant disruptions; small reforms may not be enough if the system remains locked into the original basin of attraction. Effective systems change seeks to strengthen the resilience of healthy attractors while identifying early signals of dangerous transitions.



Sunday, June 01, 2025

Growing up

 “Growing up,” she told me, “is learning to stop believing people’s words about you.”

– Lulu Miller, Why Fish Don’t Exist

There comes a quiet shift in adulthood — not just the gaining of responsibilities, but the gradual unlearning of the stories others have told us about who we are. Their labels, judgments, even their praise — all of it forms a shell that isn’t always ours to carry. Growing up, in the truest sense, might be the moment we realize we are not the sum of others’ perceptions, but something far more fluid, complex, and unfinished.
We begin to rewrite the narrative from the inside out.

Monday, May 12, 2025

Yesterday's video

Writing is thinking - that is true without a doubt. But writing for reading is a whole different ball game.  

https://www.youtube.com/watch?v=3_lTELjWqdY


I watched this video yesterday and found it fascinating and informative. Yes, I was not in full agreement on all points, but it was nonetheless worth my time.

Sunday, May 11, 2025

How AI Is Helping Us Understand Complex Systems—Not Just Predict Them

We often think of AI as a tool to predict the future—like guessing the weather, stock prices, or whether someone might get sick. But AI is starting to do something even more powerful: helping us understand the rules behind how complex systems work.

A recent issue of Complexity Thoughts explores this shift, showing how new AI methods are uncovering the hidden patterns behind things like disease spread, traffic flow, brain activity, and more. The goal isn't just to know what will happen—but to figure out why.



From Forecasting to Figuring Things Out

Most AI tools today are built to spot patterns and make forecasts. But these new approaches aim to find the actual equations: the basic rules that explain how a system behaves over time.

That’s a big leap. It means AI isn’t just guessing anymore—it’s helping build scientific models.

Why Simpler Models Are Better

Many of these studies use a method called sparse modeling. Instead of creating big, complicated equations, these models look for the smallest number of pieces needed to explain what’s going on.

Why? Because most systems—even complex ones—are driven by just a few key factors. If we can find those, we get models that are easier to understand and work with.

This approach is already being used to:

  • Study how fluids flow,
  • Track how diseases spread,
  • Understand patterns in brain signals,
  • And model chaotic systems like weather patterns.
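To make "sparse modeling" concrete, here is an illustrative sketch in the spirit of sparse regression for dynamics (the SINDy-style approach much of this work builds on): generate data from a simple hidden rule, offer the fit a library of many candidate terms, and repeatedly drop terms with tiny coefficients so only the few that matter survive. The data, library, and threshold are my own toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data from a simple hidden rule: dx/dt = -2*x + 0.5*x**3 (plus a little noise).
x = rng.uniform(-2, 2, size=400)
dxdt = -2.0 * x + 0.5 * x**3 + 0.01 * rng.normal(size=x.size)

# Candidate library: many possible terms, most of them irrelevant.
library = np.column_stack([np.ones_like(x), x, x**2, x**3, np.sin(x), np.cos(x)])
names = ["1", "x", "x^2", "x^3", "sin(x)", "cos(x)"]

# Sequentially thresholded least squares: fit, zero out small coefficients, refit.
coef, _, _, _ = np.linalg.lstsq(library, dxdt, rcond=None)
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    big = ~small
    coef[big], _, _, _ = np.linalg.lstsq(library[:, big], dxdt, rcond=None)

print({n: round(c, 3) for n, c in zip(names, coef) if c != 0.0})
# Expected: something close to {'x': -2.0, 'x^3': 0.5} -- the hidden rule, nothing else.
```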


Finding the Right Way to Look at a System

Sometimes, raw data is messy or overwhelming—like thousands of brain signals or climate measurements. Even with powerful tools, it’s hard to see what matters.

One AI method solves this by first learning the best way to describe the system, and then figuring out the rules. It’s like teaching a computer to choose the right map before trying to navigate a city.

A Machine That Thinks Like a Scientist

Another team built what they call a Bayesian machine scientist. Instead of trying one model, it tries out many different ones, tests how well they match the data, and picks the best. It even learns from a large library of past equations, the way a human scientist might rely on years of experience.
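The "try many models and keep the one the data prefers" idea can be sketched with something as small as the Bayesian information criterion (BIC), which rewards fit but penalizes extra parameters. This is not the actual Bayesian machine scientist (which searches a huge space of symbolic expressions with priors learned from a corpus of equations); the candidates and scoring below are simplified stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic observations from a hidden law: y = 3 * x**2 (with noise).
x = np.linspace(0.1, 3.0, 60)
y = 3.0 * x**2 + 0.2 * rng.normal(size=x.size)

# A few candidate model families: design matrix plus parameter count.
candidates = {
    "linear  y = a*x":      (np.column_stack([x]), 1),
    "quadratic y = a*x^2":  (np.column_stack([x**2]), 1),
    "cubic  y = a*x^3":     (np.column_stack([x**3]), 1),
    "full poly, degree 3":  (np.column_stack([np.ones_like(x), x, x**2, x**3]), 4),
}

def bic(design, k):
    """Lower is better: n*log(mean squared residual) + k*log(n)."""
    coef, _, _, _ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    n = y.size
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

scores = {name: bic(design, k) for name, (design, k) in candidates.items()}
print(min(scores, key=scores.get))  # the simple quadratic wins: same fit, fewer parameters
```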

When Randomness Is Part of the System

Some systems—like bird flocks or the brain—are naturally unpredictable. They have a lot of randomness built in. Instead of treating that randomness as noise, a new method called a Langevin Graph Network includes it in the model.

This has already led to real discoveries:

  • Showing how birds flock using rules scientists have long suspected.
  • Modeling how harmful brain proteins spread—something important for Alzheimer’s research.
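The underlying ingredient is the Langevin update itself: a deterministic drift plus an explicitly modeled random kick, rather than noise treated as measurement error. Here is a minimal illustration on a single variable; the actual Langevin Graph Network learns the drift and noise over a graph of interacting parts, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)

def langevin_step(x, drift, noise_scale, dt):
    """One step of overdamped Langevin dynamics: deterministic drift + random kick."""
    return x + drift(x) * dt + noise_scale * np.sqrt(dt) * rng.normal()

# The drift pulls the state back toward 0 (think of a bird steering toward its
# flockmates); the noise term is part of the model, not an afterthought.
drift = lambda x: -1.5 * x

x, dt = 2.0, 0.01
trajectory = []
for _ in range(2000):
    x = langevin_step(x, drift, noise_scale=0.5, dt=dt)
    trajectory.append(x)

# The state hovers around 0 but never stops jittering: structured randomness
# rather than noise to be filtered away.
print(f"mean {np.mean(trajectory[500:]):+.2f}, std {np.std(trajectory[500:]):.2f}")
```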

Why This Is a Big Deal

Together, these projects show a big shift in how we use AI:

  • Not just to automate tasks, but to help us discover how the world works.
  • Giving us simple, understandable models we can use to guide action.
  • Making science faster, more open, and easier to explain.

In a world dealing with complex challenges—like climate change, pandemics, and social disruption—this kind of AI could help us not only respond faster, but understand better.

Want to explore more? Check out Complexity Thoughts for links and summaries of these fascinating papers.