Rethinking ALP and FAR in the Age of Artificial Intelligence: How AI is reshaping transfer pricing’s foundational principles — from control and substance to profit attribution
- Posted by admin
- On October 20, 2025
Artificial Intelligence (AI) has moved from experimental adoption to operational dominance. It is now central to how multinational enterprises (MNEs) generate and distribute value, driving dynamic pricing, inventory forecasting, supply chain optimisation, and even strategic decision-making.
This evolution challenges the traditional architecture of transfer pricing, particularly the Arm’s Length Principle (ALP) and Functional, Asset, and Risk (FAR) analysis. These frameworks were built in an era where human activity defined substance, geography determined value attribution, and judgment shaped control. AI undermines each of these assumptions.
The time has come to re-examine how we interpret functions, allocate risk, and measure control in a world where economic value is increasingly generated by algorithms rather than people.
Characterising AI-Driven Activities: Functions or Tools?
In FAR analysis, the line between “functions performed” and “tools used” has always been clear. Humans performed the functions (strategy, management, and operations), while machines and software were merely tools enabling those functions.
AI collapses this distinction. An AI system that autonomously determines global pricing or reorders inventory does not merely assist human decision-making; it performs it. The question is no longer academic:
- Should algorithmic decision-making be recognised as a function in its own right?
- Or should it still be treated as a tool, with value attributed only to the humans who designed or oversee it?
This uncertainty disrupts the established logic of FAR. Treating AI merely as a tool risks undervaluing its contribution; treating it as a function requires redefining “who” performs that function: the coder, the owner, or the algorithm itself.
Locating AI Activity: Where Should Profits Be Attributed?
The ALP has traditionally linked profit attribution to where people are located. Developers, managers, and executives performing key functions were seen as the anchors of value creation.
AI upends this geography. Algorithms execute decisions from servers that may be located in data centres across multiple jurisdictions, often with no local employees or physical operations.
So where should profits be attributed?
- To the jurisdiction hosting the servers that execute AI-driven decisions?
- To the entity that developed, owns, or licenses the AI system?
- Or to the jurisdiction where humans oversee or interpret its results?
This dislocation between economic activity and human presence exposes a foundational weakness in the ALP’s geographic model: profits no longer necessarily “follow the people.”
DEMPE and AI as an Intangible Asset
Treating AI as an intangible under the DEMPE framework (Development, Enhancement, Maintenance, Protection, and Exploitation) raises equally complex questions.
Each of the DEMPE functions takes on a new meaning in the AI context:
- Development may occur through iterative machine learning, drawing on global data without direct human coding.
- Enhancement could happen automatically as algorithms self-improve over time.
- Maintenance and protection are embedded in code, often through self-healing software and cybersecurity protocols.
- Exploitation occurs through APIs, platforms, or cloud services that transcend borders.
In this environment, identifying who performs or controls DEMPE functions becomes difficult. Is it the entity that provides the training data, the one that codes the algorithm, or the one that commercially deploys it? Without clear control or decision-making by humans, traditional DEMPE attribution may no longer accurately reflect economic reality.
Risk Allocation When AI Makes Autonomous Decisions
Transfer pricing principles dictate that risk should follow control: the entity that assumes a risk must also have the ability to manage it. But AI-driven systems challenge this logic.
Consider an algorithm that autonomously adjusts prices, orders inventory, or manages logistics. When outcomes deviate (overstocking, pricing errors, or supply chain inefficiencies), who bears the risk?
- The entity that developed the algorithm?
- The entity that owns the data feeding it?
- Or the entity whose financials are affected by its decisions?
If no human actively monitors or overrides these actions, attributing “control” under the existing rules becomes problematic. Ignoring AI-generated risks, however, would distort the economic alignment that the ALP seeks to preserve.
Can the Arm’s Length Principle Still Apply?
The ALP is built on comparability: the idea that transactions between related parties can be benchmarked against independent ones. But what comparable exists for an AI-driven enterprise with self-learning algorithms, automated procurement, and fully digital customer engagement?
Traditional comparables assume human-led business models. As AI-driven models multiply, the universe of valid benchmarks narrows, and the reliability of traditional transfer pricing methods diminishes.
Unless the ALP evolves to account for non-human value creation, it risks losing relevance in transactions where no human analogue exists.
Substance and Control: Does Human Judgment Still Matter?
Tax substance has historically depended on human involvement: boards, managers, and executives making decisions that reflect economic reality.
AI questions whether substance requires human judgment at all. If an algorithm continuously monitors performance, adapts strategies, and executes decisions based on pre-set parameters, can that constitute “control”?
Should the entity that supervises or configures such AI be considered as exercising functional control, even if human input is minimal? Tax authorities will increasingly need to decide whether machine-led governance can qualify as a form of substance under existing rules.
Servers and the Attribution of Profits
Another emerging question is whether servers hosting AI systems constitute permanent establishments (PEs).
Some tax authorities argue that servers executing proprietary algorithms perform essential value-creating functions, making them more than passive infrastructure. Others insist that without human presence, there can be no PE.
This debate exemplifies the broader challenge: as economic activity becomes detached from physical and human presence, the basis for jurisdictional taxing rights must evolve or risk becoming obsolete.
The Way Forward
AI exposes the limits of transfer pricing’s human-centric foundations. The ALP and FAR frameworks, rooted in human judgment, control, and geography, must now adapt to a world where algorithms, data, and cloud networks drive value creation.
For MNEs and policymakers, the path ahead involves:
- Recognising AI-driven functions alongside human functions in FAR analyses;
- Reassessing DEMPE roles where AI contributes autonomously;
- Establishing risk allocation principles when control is partially or entirely algorithmic; and
- Redefining substance and profit attribution to reflect both human and machine-led value creation.
We are entering a transitional era, caught between traditional rules that no longer fit and new frameworks yet to be written. The challenge, and the opportunity, is to evolve transfer pricing principles without losing sight of their underlying goals: fairness, neutrality, and alignment with economic value.
AI is not just a technological leap – it is a structural shift redefining what “function,” “control,” and “value” mean in global taxation.
The age of AI demands a rethink of transfer pricing fundamentals. As algorithms become economic actors, not just tools, international tax frameworks must evolve to reflect a new reality – one where value creation may no longer require human hands, yet still demands human accountability.