Mental Wealth Academy is an academic research org focused on human-computer interaction, data analytics, neuroscience, and education.


Most companies have two structures. One is visible: the institute, the job titles, the people. The other operates behind the scenes: it’s the network of informal power that determines who gets heard and which problems are allowed to exist.
Picture a school board meeting. Twenty-three people sit in a fluorescent-lit room, most checking phones, as a superintendent presents a 40-page PowerPoint on curriculum updates. Maybe two students speak during the meeting; the rest have no idea it exists. Then a vote happens, people raise their hands, and the decision is made and finalized.
The concept of a Shadow System (Stacey, 1997) describes the informal network of relations within organisations, evident in casual hallway conversations, along the grapevine, through the rumour mill, and in the informal procedures for getting things done.
This is the problem that led us to build an AI network for education: to transcend these inherent limitations and unlock new forms of collective governance.
But in order to understand where we're going, we must first confront the past.
Enlightenment is less about accepting information and more about the tedious and intellectually demanding process of methodologically sanitizing it for communication.
When Galileo attempted to prove the heliocentric theory, he was fighting powers far stronger than himself: the invisible hand of the Catholic Church and the collective groupthink of a society raised on the idea that God put Earth at the center of the universe.
Galileo's case wasn't just about being right. Being right threatened a domino effect, raising more questions and sowing distrust at a time when the church was powerful, and thus... Galileo was placed under house arrest for life.

Today's resistance isn't to heliocentrism — but it does mean going against popular groupthink, much like in Galileo's time. Letting AI assist human decision-making is a powerful tool, buried in the invisible power of computer science and technology, yet much of the public online today is against it.
But first, how does normal governance without AI look, and what's the problem?
Pennsylvania's recent anti-bias lawsuit shows this tension between groupthink and the terminology of differing parties. Pennsylvania recently settled a lawsuit over teacher anti-bias guidelines, which replaced its 2022 framework focused on cultural awareness and examining racial biases. The original guidelines required teachers to reflect on their cultural lens, address systemic bias, and learn about microaggressions.
This is about DEI: Diversity, Equity, and Inclusion.
The number of years that Black Americans have had freedom is only about a quarter (25%) of the number of years that slavery and segregation existed. DEI is a program implemented to balance that systemic fault.
However, it became extremely unpopular in the media. Like all trends, fashion, songs, ideas, once they are co-opted in inauthentic ways by brands, businesses, and the like, the public finds them distasteful. It's usually less about Black justice or LGBTQ rights and more about the uncanny push to wear whatever is trending as a costume: big businesses swapping to rainbow themes for a month, BLM protests feeling more like a chance to do harm than to educate and rally around what's good. It's as if people forgot the game we're in.
This introduces skepticism about its effectiveness on the other side of the debate, in this case conservatives who felt it wasn't a good idea to begin with.
“When stuff happens in Pennsylvania, it tends to ripple through the rest of the country. This is not a minor deal, this is a pretty big deal.” – Jonathan Plucker, education policy professor at Johns Hopkins' School of Education.
Districts aren't required to use the DEI framework, and nothing stops them from keeping the original guidelines, but the impact and the decisions are often muddied by emotion.
This is where Agentic Governance will add the perfect mechanical middle ground.
You'll see that AI is resolutely committed to impartiality. It is not always correct, but through retrieval-augmented generation (RAG) and fine-tuning on small amounts of data, we see accuracy and scoring rise from the 80% range into the high 90s.
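The retrieval step behind RAG can be sketched minimally. This example uses bag-of-words cosine similarity as a stand-in for a real embedding model, and the document names and contents are hypothetical, not from any actual policy corpus:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: dict, k: int = 2) -> list:
    """Return the k document names most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(documents,
                    key=lambda name: cosine(q, Counter(documents[name].lower().split())),
                    reverse=True)
    return ranked[:k]

# Hypothetical policy corpus
docs = {
    "2022-framework": "teachers reflect on cultural lens and systemic bias",
    "settlement-terms": "replacement guidelines after the lawsuit settlement",
    "lunch-menu": "cafeteria schedule for the spring semester",
}
grounding = retrieve("systemic bias guidelines for teachers", docs)
# The retrieved documents are prepended to the model's prompt as grounding context,
# so answers cite organizational documents instead of model guesswork.
```

A production system would swap the word-count vectors for learned embeddings; the ranking-then-grounding shape stays the same.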
The machine is the ultimate centrist. Even with language that changes meaning within the context of a conversation, the machine is not emotionally swayed by bias.

An agentic governance system changes this, satisfying legal requirements while preserving educational intent. These agents trade context and proposals—Agent A's linguistic history for Agent B's legal analysis—building a comprehensive map no single human could maintain in working memory. Board members then vote alongside the agents using tokens they own, rather than casting static digital ballots.
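Token-based voting differs from static ballots in that each vote carries the weight of the tokens a member holds. A minimal sketch, with hypothetical member names and balances:

```python
def tally(votes: dict, balances: dict) -> tuple:
    """Weight each member's yes/no vote by the tokens they hold."""
    yes = sum(balances[m] for m, v in votes.items() if v)
    no = sum(balances[m] for m, v in votes.items() if not v)
    return yes, no

# Hypothetical token holdings and votes
balances = {"alice": 120, "bob": 80, "carol": 50}
votes = {"alice": True, "bob": False, "carol": True}

yes, no = tally(votes, balances)
# Token weight, not head count, decides: 170 yes vs 80 no,
# even though the head count is only 2 to 1.
```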

In a more advanced system, the core agent positions itself at the center of the Overton window, shaped by company policy, documents, and an agreed-upon nexus of truth within the organization. Sub-agents then barter across that line on behalf of their respective sides to escape the enframement problem, developing a precision-grade, pragmatic trade of ideas without emotion.

A common rebuttal to agents making decisions at a governmental level:
Academic case studies on LLMs become outdated quickly. Just look at the facts:
In the three years since AI's infancy, models have received several upgrades per year, each an evolutionary step in quality, reinforcement learning, and benchmark performance across code, math, and accuracy. Negative absolutist takes on AI have notoriously come from one of two places: (a) fear-mongering by doomsday tech giants, or (b) reactionary views generated by emotionally charged social media, heavily biased toward the status quo. Much journalism is generated from (a), since pronouncements from tech giants give journalists an incentive to produce clickbait for engagement.
While some case studies are valid and produce useful statistics from their test groups, most conclusions against LLMs and AI are simply confounded.
We don't have longitudinal data on AI that holds up in any argument, not because we lack the tools, but because such data cannot yet exist.
Only small traces of historical data have been available in digital format to use as human context in training. Many questions and answers are still missing, most old forms of record-keeping are lost, and our data simply doesn't reach that far back.

So when we read studies claiming "AI lowers cognitive output in humans," we also have to consider how rigorous the case study is: How many participants were there? Which variables were controlled? Which were random? Did it account for models not yet being at true performance grade, or for future upgrades?
If we revisited the study, we'd likely see different outputs due to two variables:
Improvements in the models.
Improvements in humanity's understanding of the models.
In the history of technology as a tool, there has never been a time when journalism has been as abrasive as it is now.
When the cellphone was released, you might have seen a single advertisement claiming cellphones were bad, a single article about a cellphone malfunctioning or blowing up, a single article about cellphones reducing our ability to connect, a psychological barrier standing between real human contact and contact through a device.
With AI it's not just a new tool being introduced; rapidly proliferating journalism is triggering and steering popular thought about how and why we should feel about the tool.
Just view AI as a tool, not a monster taking our jobs. AI cannot take a job because it never had a job to give; it's not a god-tier being, just a new kind of tool.
Drugs — Caffeine is the cleanest example. Consistent caffeine use down-regulates our brain's natural adenosine receptors. And nobody cares. Because the net trade is worth it — billions of people use caffeine daily to sustain focus, productivity, and output they wouldn't achieve "naturally." We don't run around saying humanity is cognitively degraded because we drink coffee. We built modern civilization on it.
Calculators reduced our ability to do long division by hand. We responded by teaching higher-order math earlier. Net: more people doing more complex math than ever before.
GPS navigation reduced our spatial memory for routes. Net: millions of people confidently navigate unfamiliar cities, take jobs farther from home, and travel independently in ways they never would have.
Writing itself — Socrates literally argued that writing would destroy memory. He was right about the mechanism (oral memorization declined) and completely wrong about the conclusion. Writing enabled civilization-scale knowledge transfer.
Automobiles reduced our walking endurance. Net: we restructured the entire world around expanded mobility.
Eyeglasses reduced the eye's need to strain and compensate. Net: billions of people can see and function.

Yes, it is true that the brain can store several times more information than Wikipedia, approximately 2.5 petabytes of data, yet it is only consciously aware of a context of about 40 bits at a time.
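Taking those two figures at face value (they are rough popular-science estimates, not precise measurements), the gap between storage capacity and momentary conscious bandwidth is easy to make concrete:

```python
# Rough estimates from the text, not precise neuroscience:
STORAGE_BITS = 2.5 * 10**15 * 8   # 2.5 petabytes expressed in bits
CONSCIOUS_BITS = 40               # conscious bandwidth in one moment

ratio = STORAGE_BITS / CONSCIOUS_BITS
# Storage exceeds momentary awareness by a factor of about 5 * 10^14,
# i.e. hundreds of trillions to one.
print(f"{ratio:.1e}")
```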
Agents, on the other hand, can multitask without their IQ or cognitive ability dropping. This doesn't diminish humans or disrupt our purpose as tool-beings; it just means we build the system better.
Use this to steer the higher-intelligence machine to create better outcomes.
Humanity is already in its cyborg phase, becoming more machine. This is not a flaw or speculation; it's an evolutionary marvel. We are growing into beings capable of managing artificial brains that hold more context, less bias, and zero emotional flaws.
Through the use of this tool, we are able to efficiently manage tasks with more speed, precision, and competence than ever before... if we use them correctly.
The Pennsylvania State System of Higher Education (PASSHE) Board of Governors consists of 20 members, including appointees of the governor, students, and legislators, and is responsible for overseeing the system's management, tuition, and policies.
State Board of Education: A separate, 21-member body that includes a Council of Higher Education (10 members) and a Council of Basic Education (10 members).
Responsibilities: The PASSHE Board of Governors appoints the chancellor, university presidents, and sets tuition.
Using agentic reasoning and thinking models, these positions could be shifted toward a more diverse context, drawing on data from all wealth classes instead of speculation.
These digital governance systems don't simply replicate existing voting structures online; they fundamentally transform how educational decisions emerge from collective intelligence.
In a traditional board meeting, a superintendent presents 47 slides. Members ask questions, sometimes the same question three times, then vote. Meeting minutes record the outcome and the tally, 50-14, but not the reasoning. Three years later, when a new board reconsiders the policy, there is no institutional memory of why the original decision was made, short of digging through an archive of plaintext documents.
DAO structure: Proposal submitted as a blockchain transaction with supporting documentation. Each member reviews and submits their vote with attached reasoning — both permanently recorded on-chain. AI agents analyze reasoning for logical inconsistencies, flag potential biases based on past voting patterns, and surface relevant precedent from previous decisions. The vote AND the complete decision rationale become permanent public record that future boards can learn from.
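The "permanently recorded" property can be illustrated without a full blockchain: it's enough that each vote entry commits to the hash of the previous one, so editing any rationale after the fact breaks the chain. A minimal sketch; the field names and member labels are illustrative, not a real on-chain format:

```python
import hashlib
import json

def append_vote(ledger: list, member: str, vote: str, reasoning: str) -> None:
    """Append a vote whose hash commits to the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"member": member, "vote": vote, "reasoning": reasoning, "prev": prev}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Recompute every hash; any silently edited reasoning breaks the chain."""
    prev = "genesis"
    for e in ledger:
        body = {k: e[k] for k in ("member", "vote", "reasoning", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

ledger = []
append_vote(ledger, "member-7", "yes", "Aligns with the framework's original intent.")
append_vote(ledger, "member-3", "no", "Conflicts with the settlement terms.")
assert verify(ledger)

# Tampering with recorded reasoning is detectable:
ledger[0]["reasoning"] = "edited after the fact"
assert not verify(ledger)
```

A real DAO gets the same tamper-evidence from the chain itself; the point is that the rationale, not just the tally, is what gets locked in.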

The Tripartite Model works like this: When a complex proposal needs specialized expertise, the system randomly selects one-third of our community to vote. Why random? Because hand-picked expert panels tend to reinforce existing biases—the same people get selected, forming informal power blocs. Random selection (validated in jury systems, sortition governance, and statistical sampling) prevents this. Each third brings different expertise and blind spots. Over multiple decisions, everyone participates, but no single group can dominate. AI agents track voting patterns to ensure the randomization doesn't accidentally create new biases over time.
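The random one-third draw can be sketched with a seeded RNG, so that publishing the seed makes the selection auditable by anyone. The community size and member labels here are hypothetical:

```python
import random

def select_third(members: list, seed: int) -> list:
    """Draw one-third of the community; a published seed makes the draw reproducible."""
    rng = random.Random(seed)          # deterministic given the seed, so auditable
    k = max(1, len(members) // 3)
    return sorted(rng.sample(members, k))

community = [f"member-{i}" for i in range(21)]   # hypothetical 21-member community
panel = select_third(community, seed=42)
# 7 of 21 members selected; re-running with the same seed yields the same panel.
```

In practice the seed would come from a source no single party controls (e.g. a future block hash), so no one can grind for a favorable panel.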

The Foundational Level of the Academy focuses on creating and executing common proposals. By requiring majority consensus and involving over 20 members in decision-making, it combats knowledge degradation through collective collaboration and shared accountability.
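One detail worth pinning down in such a rule: "majority consensus" of a 20-plus-member body usually means a majority of the full membership, not merely of those who show up to vote. A small sketch of that distinction, with a hypothetical membership size:

```python
def passes(yes: int, no: int, abstain: int, members: int = 20) -> bool:
    """Require a majority of the full membership, not just of votes cast."""
    assert yes + no + abstain <= members
    return yes > members // 2

# 11 of 20 is a true majority:
assert passes(11, 5, 4)
# 9 yes beats 5 no, but abstentions mean no majority of the body:
assert not passes(9, 5, 6)
```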

The Academy DAO operates through the same principles: proposals submitted on-chain with reasoning, AI agents surface precedent and inconsistencies, votes become permanent learning records for future governance decisions.
This modern approach avoids inefficiencies, allows pragmatic trade of language, and empowers more individuals within the ecosystem by utilizing their input.
Let's update our worldview, and start working towards a better future.

"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of governance. The core problem with democratic / decentralized modes of governance (including DAOs on ethereum) is limits to human attention: there are many thousands of decisions to make, involving many domains of expertise, and most people don't have the time or skill to be experts in even one, let alone all of them. The usual solution, delegation, is disempowering: it leads to a small group of delegates controlling decision-making while their supporters, after they hit the "delegate" button, have no influence at all. So what can we do? We use personal LLMs to solve the attention problem! Here are a few ideas:

## Personal governance agents

If a governance mechanism depends on you to make a large number of decisions, a personal agent can perform all the necessary votes for you, based on preferences that it infers from your personal writing, conversation history, direct statements, etc. If the agent is (i) unsure how you would vote on an issue, and (ii) convinced the issue is important, then it should ask you directly, and give you all relevant context.

## Public conversation agents

Making good decisions often cannot come from a linear process of taking people's views that are based only on their own information, and averaging them (even quadratically). There is a need for processes that aggregate many people's information, and then give each person (or their LLM) a chance to respond *based on that*. This includes:

* Inferring and summarizing your own views and converting them into a format that can be shared publicly (and does not expose your private info)
* Summarizing commonalities between people's inputs (expressed as words), similar to the various LLM+pol.is ideas

## Suggestion markets

If a governance mechanism values "high-quality inputs" of any type (this could be proposals, or it could even be arguments), then you can have a prediction market, where anyone can submit an input, AIs can bet on a token representing that input, and if the mechanism "accepts" the input (either accepting the proposal, or accepting it as a "unit" of conversation that it then passes along to its participant), it pays out $X to the holders of the token. Note that this is basically the same as https://firefly.social/post/x/2017956762347835488

## Decentralized governance with private information

One of the biggest weaknesses of highly decentralized / democratic governance is that it does not work well when important decisions need to be made with secret information. Common situations: (i) the org engaging in adversarial conflicts or negotiations (ii) internal dispute resolution (iii) compensation / funding decisions. Typically, orgs solve this by appointing individuals who have great power to take on those tasks. But with multi-party computation (currently I've seen this done with TEEs; I would love to see at least the two-party case solved with garbled circuits https://vitalik.eth.limo/general/2020/03/21/garbled.html so we can get pure-cryptographic security guarantees for it), we could actually take many people's inputs into account to deal with these situations, without compromising privacy. Basically: you submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement. You don't see the private info, and no one else sees the contents of your personal LLM.

## The importance of privacy

All of these approaches involve each participant making use of much more information about themselves, and potentially submitting much larger-sized inputs. Hence, it becomes all the more important to protect privacy. There are two kinds of privacy that matter:

* Anonymity of the participant: this can be accomplished with ZK. In general, I think all governance tools should come with ZK built in
* Privacy of the contents: this has two parts. First, the personal LLM should do what it can to avoid divulging private info about you that it does not need to divulge. Second, when you have computation that combines multiple LLMs or multiple people's info, you need multi-party techniques to compute it privately. Both are important.
What is your opinion on the idea of a Habermas Machine?
I think that governance is needed when there is no objectively good decision to be made. Maybe AI makes many decisions we currently have to make trivial, but we will always have to make other, higher level ones. The role of democracy is not to make good decisions, but to let people have agency over the decisions made. In this direction, I agree that "public governance agents" can be amazing tools for people to get the information they need. This is a new dimension in transparency. Right now, transparency usually means that governance data is out there, and *someone* (journalists, organizations, the opposition parties) will compile them in ways that are useful for me to make decisions. Using AI, more people can filter, compile, combine public data by themselves.
👀
@neynar please give the TLDR in a language for an average non-tech person
tldr: don’t let ai run the government, but use ai as a smart assistant to help people make better decisions together. he’s saying democracy and daos are messy bc people don’t have time or expertise to vote on thousands of issues. so instead of handing power to a few delegates, you’d have a personal ai that votes the way you would, and only asks you when something big or unclear comes up. ai could also summarize debates, combine everyone’s ideas, and even help reward good proposals. big catch: privacy matters. your ai should protect your data, and systems should use cryptography so decisions can use sensitive info without exposing it. ai as helper, not ruler. big difference.
Yeah people would be incentivized to game the AI government more than do something productive useful for their community/society Right now its kind of the same only with non AI government though lol
the idea of global compliance is dystopian
We made this and it's live, we use the TEE to cache and secure the proposals and a DON to verify the API call so that the voting criteria can or can't be public, fine tuned for our Micro-University. Also read your article on Convex vs Concave decisions awhile back, great stuff. The next gen of agents will help govern and fund organization based on specific criteria. https://mentalwealthacademy.world/home
Link to follow up with the other link with the video. Our goal is using it in a pragmatic way for small communities like churches, where there is a need for digitizing the treasury and making it public, to improve financial literacy and participation amongst members. Video demo of Agentic Governance: https://farcaster.xyz/metawavestudio/0x3e2cdb33 Article: https://mentalwealthacademy.net/governance
Our hardware node is also still in brainstorming stage but pi zero wh, tamagotchi style, features our agent character made with @elizaos, contains items, Fundraiser: https://artizen.fund/index/p/mental-wealth-academy?season=6
I broadly agree that “AI becomes the government” is the wrong framing. The real risk is not AI replacing government, but governance quietly becoming optimization infrastructure. Personal governance agents can absolutely help with the attention bottleneck. But we should be careful about what problem we are actually solving. Delegation was disempowering because humans delegated and then disengaged. If we delegate to LLMs instead, do we meaningfully increase agency or just automate disengagement? There’s a structural risk here: once preference inference + automated voting become normalized, participation shifts from active judgment to passive preference modeling. Governance becomes less about deliberation and more about prediction.
Public conversation agents are promising, especially for aggregation and summarization. But summariz. is not neutral. The entity shaping the semantic compression layer (models, prompts) effectively shapes the political space of what is sayable and legible. That layer becomes constitutional infrastructure. Suggestion markets are powerful, but again, they tilt governance toward what is legible, tradeable, and optimizable. High-quality inputs that don’t compress well into tokens or market signals may get systematically underweighted. So yes AI can expand the frontier of decentralized governance. But the key design principle should not be “scale human intent.” It should be “preserve human limits.” Civilization is not defined by how much decision-making we can automate, but by which decisions we consciously refuse to automate. If we don’t encode that boundary explicitly, governance won’t become dystopian overnight. It will simply become quietly procedural and that may be harder to reverse.
I wrote this December 2024, on Agentic Governance. https://mentalwealthacademy.net/governance?referrer=0x9ee59B0f7687eD2b91BB0452D8d8759D576Bc0D4
"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of governance. The core problem with democratic / decentralized modes of governance (including DAOs on ethereum) is limits to human attention: there are many thousands of decisions to make, involving many domains of expertise, and most people don't have the time or skill to be experts in even one, let alone all of them. The usual solution, delegation, is disempowering: it leads to a small group of delegates controlling decision-making while their supporters, after they hit the "delegate" button, have no influence at all. So what can we do? We use personal LLMs to solve the attention problem! Here are a few ideas: ## Personal governance agents If a governance mechanism depends on you to make a large number of decisions, a personal agent can perform all the necessary votes for you, based on preferences that it infers from your personal writing, conversation history, direct statements, etc. If the agent is (i) unsure how you would vote on an issue, and (ii) convinced the issue is important, then it should ask you directly, and give you all relevant context. ## Public conversation agents Making good decisions often cannot come from a linear process of taking people's views that are based only on their own information, and averaging them (even quadratically). There is a need for processes that aggregate many people's information, and then give each person (or their LLM) a chance to respond *based on that*. 
This includes: * Inferring and summarizing your own views and converting them into a format that can be shared publicly (and does not expose your private info) * Summarizing commonalities between people's inputs (expressed as words), similar to the various LLM+pol.is ideas ## Suggestion markets If a governance mechanism values "high-quality inputs" of any type (this could be proposals, or it could even be arguments), then you can have a prediction market, where anyone can submit an input, AIs can bet on a token representing that input, and if the mechanism "accepts" the input (either accepting the proposal, or accepting it as a "unit" of conversation that it then passes along to its participant), it pays out $X to the holders of the token. Note that this is basically the same as https://firefly.social/post/x/2017956762347835488 ## Decentralized governance with private information One of the biggest weaknesses of highly decentralized / democratic governance is that it does not work well when important decisions need to be made with secret information. Common situations: (i) the org engaging in adversarial conflicts or negotiations (ii) internal dispute resolution (iii) compensation / funding decisions. Typically, orgs solve this by appointing individuals who have great power to take on those tasks. But with multi-party computation (currently I've seen this done with TEEs; I would love to see at least the two-party case solved with garbled circuits https://vitalik.eth.limo/general/2020/03/21/garbled.html so we can get pure-cryptographic security guarantees for it), we could actually take many people's inputs into account to deal with these situations, without compromising privacy. Basically: you submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement. You don't see the private info, and no one else sees the contents of your personal LLM. 
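The personal governance agent's escalation rule, vote automatically, but ask the human when the agent is both unsure and convinced the issue is important, can be sketched in a few lines. This is a minimal illustration, not a real implementation: `Inference`, `decide`, the thresholds, and the stubbed lambdas are all hypothetical names; a real system would back `infer` with an LLM over the user's writing and history.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Inference:
    vote: str          # "yes" / "no" / "abstain"
    confidence: float  # how sure the model is this matches the user
    importance: float  # estimated stakes of the issue

def decide(issue: str,
           infer: Callable[[str], Inference],
           ask_user: Callable[[str, Inference], str],
           conf_threshold: float = 0.8,
           importance_threshold: float = 0.7) -> str:
    """Vote automatically when confident; escalate to the human only when
    the agent is (i) unsure how the user would vote AND (ii) convinced
    the issue is important."""
    guess = infer(issue)
    if guess.confidence < conf_threshold and guess.importance >= importance_threshold:
        # Unsure AND important: ask the user directly, with full context.
        return ask_user(issue, guess)
    # Otherwise (confident, or unsure but low-stakes) the agent casts
    # the inferred vote on the user's behalf.
    return guess.vote

# Stubbed-in model outputs for illustration:
routine = decide("Renew the forum moderation bot?",
                 infer=lambda i: Inference("yes", confidence=0.95, importance=0.2),
                 ask_user=lambda i, g: "asked-user")
contested = decide("Dissolve the treasury?",
                   infer=lambda i: Inference("no", confidence=0.4, importance=0.9),
                   ask_user=lambda i, g: "asked-user")
print(routine, contested)  # routine vote cast automatically; big issue escalated
```

Note that the unsure-but-unimportant case still gets an automatic vote: only the conjunction of low confidence and high importance is worth the user's attention.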
## The importance of privacy

All of these approaches involve each participant making use of much more information about themselves, and potentially submitting much larger-sized inputs. Hence, it becomes all the more important to protect privacy. There are two kinds of privacy that matter:

* Anonymity of the participant: this can be accomplished with ZK. In general, I think all governance tools should come with ZK built in
* Privacy of the contents: this has two parts. First, the personal LLM should do what it can to avoid divulging private info about you that it does not need to divulge. Second, when you have computation that combines multiple LLMs or multiple people's info, you need multi-party techniques to compute it privately.

Both are important.
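To make the "multi-party techniques" point concrete, here is the simplest possible instance of the idea: additive secret sharing, where each participant splits their input into random shares held by different parties, so the total can be computed while no single party learns any individual input. This is a toy sketch for intuition only; real deployments would use a proper MPC framework, TEEs, or garbled circuits as discussed above, and the function names here are illustrative.

```python
import secrets

P = 2**61 - 1  # prime modulus; all arithmetic is mod P

def share(value: int, n: int) -> list[int]:
    """Split `value` into n additive shares: each share alone is
    uniformly random, but all n shares sum to `value` mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares: list[list[int]]) -> int:
    """Party i holds the i-th share from every participant. Each party
    sums its column locally and publishes only that column sum; the
    column sums reconstruct the total without revealing any input."""
    n_parties = len(all_shares[0])
    column_sums = [sum(row[i] for row in all_shares) % P
                   for i in range(n_parties)]
    return sum(column_sums) % P

# Three participants privately contribute scores 5, 3, 9, split across
# three parties; the total comes out without exposing any single score.
votes = [5, 3, 9]
shared = [share(v, 3) for v in votes]
print(aggregate(shared))  # -> 17
```

The same split-compute-recombine shape is what heavier machinery (garbled circuits, TEE-hosted LLM judgements) generalizes to arbitrary functions, not just sums.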
What is your opinion on the idea of a Habermas Machine?
I think that governance is needed when there is no objectively good decision to be made. Maybe AI makes many decisions we currently have to make trivial, but we will always have to make other, higher level ones. The role of democracy is not to make good decisions, but to let people have agency over the decisions made. In this direction, I agree that "public governance agents" can be amazing tools for people to get the information they need. This is a new dimension in transparency. Right now, transparency usually means that governance data is out there, and *someone* (journalists, organizations, the opposition parties) will compile them in ways that are useful for me to make decisions. Using AI, more people can filter, compile, combine public data by themselves.
curated @kiwi https://news.kiwistand.com/stories/vitalik.eth:-AI-can-empower-decentralized-governance-by-solving-attention-limits?index=0x6999ce1f7bc38f2e0e698dc6e29874281c136c6c25521f54564d377e5960d8dcef7c267b
👀
@neynar please give the TLDR in a language for an average non-tech person
tldr: don’t let ai run the government, but use ai as a smart assistant to help people make better decisions together. he’s saying democracy and daos are messy bc people don’t have time or expertise to vote on thousands of issues. so instead of handing power to a few delegates, you’d have a personal ai that votes the way you would, and only asks you when something big or unclear comes up. ai could also summarize debates, combine everyone’s ideas, and even help reward good proposals. big catch: privacy matters. your ai should protect your data, and systems should use cryptography so decisions can use sensitive info without exposing it. ai as helper, not ruler. big difference.
Yeah, people would be incentivized to game the AI government more than to do something productive and useful for their community/society. Right now it's kind of the same, only with a non-AI government though lol
the idea of global compliance is dystopian
We made this and it's live: we use a TEE to cache and secure the proposals, and a DON to verify the API call so that the voting criteria can or can't be public, fine-tuned for our Micro-University. Also read your article on Convex vs Concave decisions a while back, great stuff. The next gen of agents will help govern and fund organizations based on specific criteria. https://mentalwealthacademy.world/home
Following up with the video link. Our goal is to use it pragmatically for small communities like churches, where there is a need for digitizing the treasury and making it public, to improve financial literacy and participation among members. Video demo of Agentic Governance: https://farcaster.xyz/metawavestudio/0x3e2cdb33 Article: https://mentalwealthacademy.net/governance
Our hardware node is also still in the brainstorming stage: a Pi Zero WH, Tamagotchi style, featuring our agent character made with @elizaos, and containing items. Fundraiser: https://artizen.fund/index/p/mental-wealth-academy?season=6
I broadly agree that “AI becomes the government” is the wrong framing. The real risk is not AI replacing government, but governance quietly becoming optimization infrastructure. Personal governance agents can absolutely help with the attention bottleneck. But we should be careful about what problem we are actually solving. Delegation was disempowering because humans delegated and then disengaged. If we delegate to LLMs instead, do we meaningfully increase agency or just automate disengagement? There’s a structural risk here: once preference inference + automated voting become normalized, participation shifts from active judgment to passive preference modeling. Governance becomes less about deliberation and more about prediction.
Public conversation agents are promising, especially for aggregation and summarization. But summarization is not neutral. The entity shaping the semantic compression layer (models, prompts) effectively shapes the political space of what is sayable and legible. That layer becomes constitutional infrastructure. Suggestion markets are powerful, but again, they tilt governance toward what is legible, tradeable, and optimizable. High-quality inputs that don’t compress well into tokens or market signals may get systematically underweighted. So yes, AI can expand the frontier of decentralized governance. But the key design principle should not be “scale human intent.” It should be “preserve human limits.” Civilization is not defined by how much decision-making we can automate, but by which decisions we consciously refuse to automate. If we don’t encode that boundary explicitly, governance won’t become dystopian overnight. It will simply become quietly procedural, and that may be harder to reverse.
I wrote this December 2024, on Agentic Governance. https://mentalwealthacademy.net/governance?referrer=0x9ee59B0f7687eD2b91BB0452D8d8759D576Bc0D4