A network of tech donors skeptical of artificial intelligence is paying to insert reporters into major newsrooms, where they go on to report stories amplifying the donors’ concerns about risks in AI development, a Washington Examiner review has found.
The Tarbell Center for AI Journalism’s fellowship program places reporters at prestigious news organizations such as Bloomberg, TIME, the Information, the Guardian, NBC News, the Verge, and the Los Angeles Times while simultaneously paying their salaries. The group paying those salaries is funded through donations from Open Philanthropy, the EA Infrastructure Fund, the Future of Life Institute, and the Survival and Flourishing Fund, all of which have strong connections to the effective altruism movement.
Effective altruists broadly believe that the development of AI, if not carefully managed, could seriously harm or even destroy the human race. As such, they are often deeply invested in debates surrounding AI safety and regulation.
A Washington Examiner review of articles published by the Tarbell Center’s 2025 fellowship cohort found that AI safety and regulation were among the topics that appeared most frequently in their coverage. Many of the angles covered by Tarbell Center fellows are popular with AI safety advocates in the effective altruism movement.
Fellows published pieces reporting on concerns that the lack of guardrails on AI chatbots could enable self-harm or eating disorders. Reports on the alleged security risks that AI models pose to sensitive data were also prevalent. Other fellows wrote stories on the potential dangers of AI, covering topics such as the proliferation of deepfakes and AI-generated misinformation, the tendency of some AI models to pursue their own ends while misleading users, AI-enabled cheating, alleged bigotry among AI models, concerns that AI may alter the historical record, and the purportedly excessive energy use of AI data centers.

In one case, the Tarbell Center’s fellow at NBC News wrote a piece covering accusations that OpenAI is using legal tactics to silence its opposition, including the Future of Life Institute. NBC News included a disclosure that the Tarbell Center, which pays the fellow’s salary, is partially funded by the Future of Life Institute.
The Tarbell Center states on its website that neither it nor its donors exert editorial control over the content produced by fellows. It also asserts that applicants for its fellowship program are evaluated based on “journalistic excellence and understanding of AI, not ideological alignment.”
“The Tarbell Center maintains strict editorial independence from all funders and donors,” Cillian Crosson, the organization’s executive director, told the Washington Examiner. “Our fellows and their host newsrooms maintain complete autonomy from Tarbell, and we’re proud of the role we play in supporting independent journalism from a wide range of perspectives.”
Indeed, a handful of stories published by Tarbell Center fellows discuss positive applications of AI, but these same authors often pen negative pieces as well. The center’s fellow at the South China Morning Post, for example, wrote a piece in October about how a Chinese technology firm reported an increase in research efficiency from AI integration and, a few days later, reported on a study that found AI models could be negatively impacting mental health through excessive flattery.
Although some of the headlines written by the Tarbell Center’s fellows contain positive news about AI, studies have shown that negative headlines attract far more attention from readers. Negative headlines about AI, as it happens, often align more closely with the concerns of many effective altruists than positive ones do.
Open Philanthropy, which the Tarbell Center identifies as its largest donor, is bankrolled by Dustin Moskovitz, a co-founder of Facebook, now Meta. Moskovitz has publicly identified with effective altruism, and Open Philanthropy has been described as a “pillar” of the effective altruism community, funding a range of other groups aligned with its cause.

Moskovitz, alongside other notable figures, signed a declaration in 2023 stating that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The Future of Life Institute, which is itself funded by Open Philanthropy, also works in effective altruist circles to mitigate AI risk. The Tarbell Center reports receiving between $100,000 and $1 million from the organization. The donation does not appear in the Future of Life Institute’s most recent tax filings, which cover its activities through the end of 2023, meaning the contribution occurred after that point.
The Survival and Flourishing Fund, which also gave between $100,000 and $1 million to the Tarbell Center, is seen by effective altruists as a reliable source of funding given its past support of their projects. Andrew Critch, one of the organization’s co-founders, posts on the primary effective altruism forum about AI risk and grant opportunities.
With big money come structural advantages.
While the stories published by Tarbell Center fellows are subject to the same fact-checking procedures as other pieces produced by their host outlets, the outside funding behind the fellows allows outlets to carry coverage they might not otherwise afford. For instance, NBC News announced major layoffs in October, reportedly over budget concerns, but had onboarded its Tarbell Center fellow, whose salary is paid by a third party, just a month earlier.
The Tarbell Center’s fellowship program also appears to be growing, increasing from seven fellows in its 2024 cohort to 16 for its 2025 class.
As the media landscape continues to be rattled by layoffs and other financial struggles, effective altruists are providing much-needed funding for ventures they support.
The Argument, a center-left outlet employing columnists including Matt Bruenig and Matthew Yglesias, received $1 million in funding from Open Philanthropy in June “to support journalism on policies related to abundance, progress, and economic growth.” Since then, the outlet has published headlines such as “We need to be able to sue AI companies” and “ChatGPT and the end of learning,” as well as a positive review of a book arguing that AI companies are risking the destruction of the human race.
Beyond media, effective altruist-linked organizations have also looked to inform policymaking conversations directly through fellowship programs. The Washington Examiner previously reported that the Horizon Institute for Public Service, a nonprofit organization funded by effective altruism-linked foundations such as Open Philanthropy, had deployed a small army of fellows at influential think tanks and government offices across Washington, D.C., and that those fellows, at times, work on policy affecting the goals of effective altruists.
