OpenAI recently released a 13-page policy paper detailing its proposals for managing the impact of artificial intelligence on the American workforce. The company suggested imposing higher capital gains taxes on corporations that replace workers with AI, with the revenue allocated to expand the public safety net. Proposed initiatives included establishing a public wealth fund, implementing a four-day workweek financed by “efficiency dividends,” and creating government programs to facilitate workers’ transition into “human-centered” roles, all to be funded by the anticipated abundance generated by AI.
The paper’s release coincided with the publication of an extensive article by The New Yorker’s Ronan Farrow and Andrew Marantz. The report detailed a history of alleged misrepresentations by OpenAI CEO Sam Altman to various stakeholders, including investors, employees, the board, and lawmakers involved in AI regulation. Critics suggested the narrative reinforced concerns that the company may prioritize financial and political objectives over its publicly stated idealistic values.
While some observers acknowledged that the policy paper introduced valuable new ideas into the discourse on AI governance, others expressed skepticism about OpenAI’s commitment to its own proposals. The company’s past engagement with government policy has drawn criticism: Sam Altman has publicly advocated for federal AI oversight, even proposing a dedicated federal agency in 2023, yet the New Yorker piece highlighted accusations that he privately worked to undermine AI safety legislation.
A California legislative aide reportedly accused OpenAI of “increasingly cunning, deceptive behavior” to defeat a 2023 state AI safety bill that the company had publicly supported. Furthermore, in 2025, OpenAI reportedly issued subpoenas to supporters of a California state-level AI bill, an action described by one recipient as an attempt to “scare them into shutting up.” Despite earlier collaboration with the Biden administration on AI safety standards, Altman reportedly persuaded the subsequent Trump administration to discontinue these initiatives.
Malo Bourgon, CEO of the Machine Intelligence Research Institute (MIRI), said that while the paper’s authors likely had good intentions, questions remain about whether the company’s actions will align with its stated values. Nathan Calvin, general counsel at the AI policy nonprofit Encode, described OpenAI’s policy and government affairs engagement as “abysmal,” and expressed skepticism that the company would follow through on its proposed principles beyond general statements.

