Eliezer Yudkowsky
| Eliezer Yudkowsky | |
|---|---|
| *Yudkowsky at Stanford University in 2006* | |
| Born | Eliezer Shlomo Yudkowsky, September 11, 1979 |
| Organization | Machine Intelligence Research Institute |
| Known for | Coining the term *friendly artificial intelligence*; research on AI safety; rationality writing; founder of LessWrong |
| Website | www |
Eliezer S. Yudkowsky (/ˌɛliˈɛzər jʌdˈkaʊski/ EL-ee-EZ-ər yud-KOW-skee; born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.