The idea of an “algorithm tax” has gained traction as artificial intelligence reshapes economies and threatens to erode traditional tax bases. If machines increasingly perform tasks once done by humans, the logic goes, governments must find new ways to capture value and fund the social state. At first glance, taxing algorithms or AI systems seems like a natural solution. A closer look at the economics of software, however, suggests that this approach rests on a misunderstanding of how value is created and distributed in digital markets.
Software is not inherently valuable. It only becomes valuable when someone is willing to pay for it. This may sound obvious, but it is often overlooked in discussions about AI. The mere existence of an algorithm does not generate economic output. Without demand, software is just unused potential. As AI dramatically lowers the cost of creating software, the supply of digital products is exploding. Code that once required teams of engineers can now be generated quickly and cheaply, sometimes almost instantly.
This leads to a fundamental economic effect. When supply increases while demand remains limited, prices fall. Software already has near-zero marginal costs, meaning that once it is created, distributing additional copies is essentially free. AI amplifies this dynamic by reducing not only marginal costs but also development costs and barriers to entry. The result is a flood of software competing for finite user attention and purchasing power. In competitive markets, this drives prices toward zero and eliminates profits.
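The price-toward-zero dynamic can be illustrated with a standard textbook model. The sketch below uses symmetric Cournot competition with linear demand P = a − Q and identical marginal cost c; the parameter values are illustrative assumptions, not estimates. With near-zero marginal cost (c = 0, the software case), both the equilibrium price and per-firm profit shrink toward zero as the number of competitors n grows:

```python
# Toy Cournot model: linear demand P = a - Q, n identical firms with
# marginal cost c. Standard symmetric equilibrium results:
#   market price     p  = (a + n*c) / (n + 1)
#   per-firm profit  pi = ((a - c) / (n + 1))**2
# The values a = 100 and c = 0 are illustrative assumptions.

def cournot_equilibrium(n: int, a: float = 100.0, c: float = 0.0):
    """Return (price, per-firm profit) with n identical competitors."""
    price = (a + n * c) / (n + 1)
    profit = ((a - c) / (n + 1)) ** 2
    return price, profit

# As entry barriers fall and n explodes, price and profit collapse.
for n in (1, 10, 100, 10_000):
    p, pi = cournot_equilibrium(n)
    print(f"n={n:>6}: price={p:8.4f}  profit per firm={pi:12.6f}")
```

The point of the toy model is only the limit behavior: when AI removes the development-cost barrier and n grows without bound, price converges to marginal cost, which for software is essentially zero, and profit per firm vanishes with it.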
If profits disappear, so does the tax base. An algorithm tax implicitly assumes that AI will generate widespread, taxable value. In reality, the opposite may happen. Most software will become commoditized and barely monetizable. The economic gains will not be broadly distributed across countless developers and firms. Instead, they will concentrate in a small number of dominant platforms that benefit from scale, data advantages, and network effects.
This concentration changes the nature of the problem. The challenge is not that AI creates too much value to tax, but that it concentrates value in very few places. In such an environment, a general tax on algorithms becomes difficult to define and even harder to enforce. What exactly is being taxed: lines of code, models, outputs, or usage? Any broad definition risks capturing activities that generate little or no profit, while missing the highly profitable structures where value actually accumulates.
Even if policymakers focus on global tech giants, market mechanisms still apply. These firms operate in competitive and regulatory environments that shape their behavior. If taxed heavily in one jurisdiction, they may shift operations, adjust pricing, or pass costs on to consumers and business users. In digital markets, where services are easily scalable and mobile, tax incidence does not simply fall on the targeted company. It propagates through the system, often in unintended ways. Higher costs can reduce investment, limit innovation, or reinforce the dominance of incumbents that are best equipped to absorb regulatory burdens.
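The pass-through claim follows from the standard incidence result for a per-unit tax in a competitive market: the share borne by buyers is e_s / (e_s + e_d), where e_s and e_d are the (absolute) elasticities of supply and demand. The elasticity numbers below are illustrative assumptions only, chosen to mirror the "easily scalable and mobile" digital-supply case:

```python
def consumer_share(supply_elasticity: float, demand_elasticity: float) -> float:
    """Fraction of a per-unit tax passed through to buyers.

    Textbook incidence formula: e_s / (e_s + e_d), with both
    elasticities taken as positive magnitudes.
    """
    return supply_elasticity / (supply_elasticity + demand_elasticity)

# Highly elastic supply (services can relocate or rescale cheaply)
# versus comparatively inelastic demand: most of the tax lands on users.
share = consumer_share(supply_elasticity=10.0, demand_elasticity=1.0)
print(f"share of tax borne by consumers: {share:.2f}")  # ≈ 0.91
```

Under these assumed elasticities, roughly 91 percent of the tax would be paid by consumers and business users rather than the targeted platform, which is the mechanism behind the "propagates through the system" observation above.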
This does not mean that taxation is futile, but it does suggest that the focus should shift. Instead of taxing algorithms as such, it may be more effective to target the sources of economic power that remain robust under these conditions. These include monopoly rents, control over data, and the ownership of digital infrastructure. Alternatively, broader tax bases such as consumption or capital may prove more stable in a world where labor plays a diminishing role.
The rise of AI does create a real fiscal challenge, but it is not solved by assuming that algorithms themselves are a reliable source of taxable value. Markets still function, prices still adjust, and value still concentrates. Any viable policy must start from these basic economic realities rather than from the seductive but misleading idea that software, by its mere existence, can carry the weight of the social state.