Steering AI towards the public interest

A recent paper provides a blueprint for fostering innovation in artificial intelligence outside the dictates of Big Tech.

Lean Ka-Min

AS countries rush to embrace the potential of artificial intelligence, they must take care not to fall into the clutches of dependency on the powerful technology corporations. A tech policy expert has outlined how governments may seek to build AI systems that are not beholden to Big Tech but more aligned with the public interest.

In a recent paper, Burcu Kilic charts a course that countries can take towards the development of an independent and resilient AI sector using the tools of industrial policy, which can be defined in simple terms as ‘any state intervention promoting specific industries or activities’. Kilic is a tech and human rights fellow at the Carr Center, Harvard Kennedy School, and a senior fellow with the Canada-based Centre for International Governance Innovation (CIGI), which published her paper in March. The paper, entitled ‘AI, Innovation and the Public Good: A New Policy Playbook’, is available on the CIGI website.

Currently, Kilic notes, most AI strategies end up being AI adoption strategies reliant on the digital infrastructure developed by major tech companies like Amazon, Google and Microsoft. It is not difficult to see why: the design, training and running of AI systems require vast amounts of computing power and data and sophisticated cloud-based infrastructure – resources controlled by Big Tech.

Yet many countries, desperate not to miss the AI boat, aim to jump on in a big way, going for large-scale implementation. Being more resource-intensive, this ‘bigger is better’ approach, cautions Kilic, only reinforces dependency on the dominant platforms. Instead of adding to the imbalance of power, countries can develop more autonomous AI models with the right mix of industrial policies focused on spurring innovation in line with national priorities.
Domestic innovation capabilities can be enhanced, Kilic says, through cooperation between local researchers, universities, technologists, companies and investors to ‘build an equitable infrastructure that provides access to compute power and clean data sets. This framework would promote a collaborative model, empowering civil society, local communities, researchers and local innovators to participate in designing and developing AI systems. In the long run, this would reduce reliance on big tech companies for infrastructure, public services and technological needs’.

‘Government demand,’ stresses Kilic, ‘can be a powerful driver of local innovation, whether by procuring new AI systems or investing in infrastructure.’ Accordingly, government procurement policies should be crafted to support local innovation where feasible and prioritise domestic players while bearing in mind any constraints thereon under the country’s trade agreements.

The government is also key to supporting AI research and development (R&D). Its role has become all the more crucial given that Big Tech is now investing heavily in basic research in this field – and therefore increasingly influencing research priorities – departing from the traditional arrangement where industry would look to the universities for fundamental research.

Beyond basic research, suggests Kilic, public R&D initiatives should also ‘support applied research, foster collaboration with the local industry, and address broader societal and economic dimensions of AI development’. It is precisely such social concerns that stand to be given short shrift by industry-backed research, which pays less attention to issues of robustness, interpretability, fairness and security despite the public’s strong interest in ensuring AI models are trustworthy, says Kilic.
She thus calls for public funding to ‘prioritise areas that align with broader societal needs, including interpretability, defensive cybersecurity, benchmarking and evaluations, and privacy-preserving machine learning’, which are ‘essential for ensuring AI systems are reliable, equitable and secure in their applications’.

There is also an imbalance in access to specialised computing resources, data sets and top human talent between the major tech companies and universities, especially non-elite universities. This gap, warns Kilic, ‘threatens to undermine the long-term research and training functions traditionally performed by universities, hobbling their ability to sustain innovation and educate the next generation of AI talent’.

To bridge the computing divide between industry and academia, proposals have been put forward for governments to invest in a ‘national research cloud’. Such infrastructure, Kilic emphasises, must remain independent of tech companies and promote public-interest research free from corporate influence, without the investment ending up as a research subsidy for the tech firms.

The success of industrial and innovation policy demands coordination in government policymaking beyond innovation agencies to encompass multiple policy domains and sectoral ministries in a ‘whole-of-government’ approach, underlines Kilic. Governments, she says, should not give up in the face of the current Big Tech dominance but should ‘embrace these complexities as opportunities to shape policies that rebuild the AI ecosystem from the ground up’. In this regard, the whole-of-government approach can be complemented by ‘participatory policymaking, where civil society, local communities, workers and researchers can help design AI policies’.

The policy journey can begin by focusing on smaller AI models, an approach which Kilic says provides a more practical and achievable foundation for AI development.
‘Rejecting the blind replication of the US tech model characterised by data extraction, commodification and market concentration creates a space for responsible, equitable and democratic innovation that prioritises productivity and social goals. … Ultimately, it all comes down to balancing the need to foster innovation with serving the public interest.’

The industrial policy measures discussed above will also need to be supported by a country’s competition and trade policies, says Kilic.

Competition policy comes into play when dealing with the market power of the tech giants. Antitrust laws can be invoked to investigate and prohibit unfair and anti-competitive practices such as self-preferencing, tying, exploiting customers and restricting access to key inputs, thereby allowing the entry of new players, greater choice for businesses and consumers, and more scope for innovation. Structural interventions like blocking anti-competitive mergers and requiring asset divestments, Kilic says, are more effective than behavioural remedies that seek to regulate a company’s behaviour instead of changing its structure.

According to Kilic, competition policy should strike ‘a balance between short- and long-term priorities, price effects versus investment incentives and consumer interests versus local industries’. However, she contends that conventional competition policy may not be up to the task, as it is rooted in a neoliberal framework that prioritises productive efficiency and seeks to maximise the ‘narrow concept’ of consumer welfare at the potential expense of the broader national interest in promoting productive and dynamic industries. Competition policy thus needs to be aligned with a country’s national innovation strategy. ‘When carefully designed, industrial and competition policies can work together to foster innovation, market fairness and sustainable development,’ asserts Kilic.
‘Without such efforts, countries risk remaining participants rather than creators in the digital economy.’

As with competition policy, trade policy in its neoliberal incarnation is not conducive to industrial policy success. International trade agreements often restrict government actions to build up domestic industry, viewing such measures as trade barriers that impede the workings of the free market. When it comes to AI, the current global regime for trade in digital goods and services ‘fails to support AI industrial policy; instead, it reinforces structural dependencies and increases reliance on big tech companies’, Kilic laments. In place of this constricting framework, she recommends that countries revisit their trade commitments and reclaim the policy space to implement measures to effect more inclusive digital development.

Such space would necessarily cover all areas of the digital economy. It would encompass not only the AI industrial policies discussed in Kilic’s paper but also other, broader policy initiatives aimed at securing digital sovereignty – the capacity to ‘steer the development of science and technology, so citizens can access, understand, and produce technology that truly improves their lives’.1 Proposals towards this end have envisioned open and decentralised public digital infrastructure, and include calls for a ‘Digital Non-Aligned Movement’ of nations collaborating on digital transformation beyond the sway of the tech powers.2

Whether it’s AI or other digital innovations, escaping the stranglehold of Big Tech will not be easy, but the need to ensure technology empowers, not subjugates, demands nothing less.

Lean Ka-Min is editor of Third World Resurgence.

Notes
*Third World Resurgence No. 363, 2025/2, pp 9-10