Eliezer Yudkowsky
American AI researcher and writer (born 1979)
| Field | Value |
|---|---|
| name | Eliezer Yudkowsky |
| image | Eliezer_Yudkowsky,_Stanford_2006_(square_crop).jpg |
| caption | Yudkowsky at Stanford University in 2006 |
| birth_name | Eliezer Shlomo (or Solomon) Yudkowsky |
| birth_date | September 11, 1979 |
| birth_place | Chicago, Illinois, U.S. |
| organization | Machine Intelligence Research Institute |
| known_for | Coining the term friendly artificial intelligence; research on AI safety; rationality writing; founder of LessWrong |
Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies. He is best known for If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, a New York Times Best Seller he co-authored with Nate Soares, as well as the Harry Potter fanfiction Harry Potter and the Methods of Rationality.
Work in artificial intelligence safety
Goal learning and incentives in software systems
Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in Stuart Russell and Peter Norvig's undergraduate textbook Artificial Intelligence: A Modern Approach. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time.
In response to the instrumental convergence concern, which implies that autonomous decision-making systems with poorly designed goals would have default incentives to mistreat humans, Yudkowsky and other MIRI researchers have recommended that work be done to specify software agents that converge on safe default behaviors even when their goals are misspecified. Yudkowsky also proposed in 2004 a theoretical AI alignment framework called coherent extrapolated volition, which involves designing AIs to pursue what people would desire under ideal epistemic and moral conditions.
Capabilities forecasting
In the intelligence explosion scenario hypothesized by I. J. Good, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligence. Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in detail, while citing Yudkowsky on the risk that anthropomorphizing advanced AI systems will cause people to misunderstand the nature of an intelligence explosion: "AI might make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of 'village idiot' and 'Einstein' as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general."
In Artificial Intelligence: A Modern Approach, Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various tasks, an intelligence explosion may not be possible.
Time op-ed
In a 2023 op-ed for Time magazine, Yudkowsky discussed the risk of artificial intelligence and advocated for international agreements to limit it, including a total halt on the development of AI. He suggested that participating countries should be willing to take military action, such as "destroy[ing] a rogue datacenter by airstrike", to enforce such a moratorium. The article helped introduce the debate about AI alignment to the mainstream, leading a reporter to ask President Joe Biden a question about AI safety at a press briefing.
If Anyone Builds It, Everyone Dies
Together with Nate Soares, Yudkowsky wrote If Anyone Builds It, Everyone Dies, which was published by Little, Brown and Company on September 16, 2025.
Rationality writing
Between 2006 and 2009, Yudkowsky and Robin Hanson were the principal contributors to Overcoming Bias, a cognitive and social science blog sponsored by the Future of Humanity Institute of Oxford University. In February 2009, Yudkowsky founded LessWrong, a "community blog devoted to refining the art of human rationality". Overcoming Bias has since functioned as Hanson's personal blog.
Over 300 blog posts by Yudkowsky on philosophy and science (originally written on LessWrong and Overcoming Bias) were released as an ebook, Rationality: From AI to Zombies, by MIRI in 2015. This book is also referred to as The Sequences. MIRI has also published Inadequate Equilibria, Yudkowsky's 2017 ebook on societal inefficiencies.
Yudkowsky has also written several works of fiction. His fanfiction novel Harry Potter and the Methods of Rationality uses plot elements from J. K. Rowling's Harry Potter series to illustrate topics in science and rationality.
Personal life
Yudkowsky is an autodidact and did not attend high school or college. He is Jewish and was raised as a Modern Orthodox Jew, but is now secular.
References
- "Eliezer Yudkowsky on 'Three Major Singularity Schools'". YouTube (video mEt1Wfl1jvo).
- Silver, Nate (2023-04-10). "How Concerned Are Americans About The Pitfalls Of AI?". FiveThirtyEight.
- Ocampo, Rodolfo (2023-04-04). "I used to work at Google and now I'm an AI researcher. Here's why slowing down AI development is wise".
- Gault, Matthew (2023-03-31). "AI Theorist Says Nuclear War Preferable to Developing Advanced AI".
- Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall.
- Leighton, Jonathan (2011). The Battle for Compassion: Ethics in an Apathetic Universe. Algora.
- Kurzweil, Ray (2005). The Singularity Is Near. Viking Penguin.
- Ford, Paul (February 11, 2015). "Our Fear of Artificial Intelligence".
- Yudkowsky, Eliezer (2008). Global Catastrophic Risks. Oxford University Press.
- "Corrigibility" (2015). AAAI Publications.
- Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Moss, Sebastian (2023-03-30). "'Be willing to destroy a rogue data center by airstrike' - leading AI alignment researcher pens Time piece calling for ban on large GPU clusters". Data Center Dynamics.
- Ferguson, Niall (2023-04-09). "The Aliens Have Landed, and We Created Them". Bloomberg News.
- Hutson, Matthew (2023-05-16). "Can We Stop Runaway A.I.?".
- "If Anyone Builds It, Everyone Dies". Little, Brown and Company.
- Miller, James (2012). Singularity Rising. BenBella Books.
- Miller, James (July 28, 2011). "You Can Learn How To Become More Rational".
- "Rifts in Rationality". New Rambler Review.
- Metz, Cade (2025-08-04). "The Rise of Silicon Valley's Techno-Religion".
- Machine Intelligence Research Institute. "Inadequate Equilibria: Where and How Civilizations Get Stuck".
- Snyder, Daniel D. (2011-07-18). "'Harry Potter' and the Key to Immortality".
- Packer, George (November 28, 2011). "No Death, No Taxes".
- (June 19, 2019). "He co-founded Skype. Now he's spending his fortune on stopping dangerous AI.". Vox.
- Saperstein, Gregory (August 9, 2012). "5 Minutes With a Visionary: Eliezer Yudkowsky".
- (2022-12-01). "Synagogues are joining an 'effective altruism' initiative. Will the Sam Bankman-Fried scandal stop them?". Jewish Telegraphic Agency.
- Yudkowsky, Eliezer (October 4, 2007). "Avoiding your belief's real weak points".
This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.