Artificial Intelligence (AI) innovations are being developed and launched faster than most of the world can keep up with. Just two years ago, only a handful of people knew ChatGPT existed; today, the large language model (LLM) boasts over 180 million users, according to the latest data. More LLMs have launched within this period, with big tech firms racing to dominate the AI space.
But are the current checks enough to hold innovators and stakeholders in the AI industry accountable? This is the billion-dollar question. While it is no secret that generative AI tools are significantly enhancing productivity across several industries, a deeper look at the fundamentals raises ethical and data integrity questions. There have already been several instances where these LLMs have been faulted for bias.
The Danger of Centralization in AI Development
As it stands, most of the development in the AI ecosystem is being done by a handful of entities, most of which make up the Magnificent 7 stocks in the S&P 500. These are the big tech companies, including the likes of Microsoft, which owns a huge stake in ChatGPT’s parent company OpenAI, and Nvidia, which controls close to 80% of the global GPU chip market.
Of course, it is arguable that leading tech companies based in the U.S. have every right to compete in a free and fair market on the principles of capitalism. But at the same time, their dominance in AI poses a huge risk: how can we entrust the future of technology to entities that have in the past proven untrustworthy with our data? And what happens if AI grows as big as the internet while in the hands of a few corporations?
These are some of the questions we should be asking. The signs are already showing that the average AI consumer might be giving up more value (data) than they realize in exchange for ‘free’ tools. What’s more, current policies are not sufficient to tame any bias that big tech might carry into the AI tools it develops. Here are a few major instances that should raise eyebrows about leaving AI to centralized entities:
Gemini’s racial bias allegations: Google’s LLM came under criticism for biased image outputs. The incident raised questions about whether the training data was too narrow or whether the trainers’ own cognitive biases were to blame.
ChatGPT data privacy violations: OpenAI has been questioned by multiple regulators in different jurisdictions over its data privacy measures. While nothing conclusive has materialized, these concerns are valid in an age where data is the new oil.
Microsoft’s AI assistant labeled ‘spyware’: Microsoft’s new AI assistant, Copilot, has faced criticism for its “Recall” feature, which records screen activity so users can revisit past tasks. Critics argue it resembles spyware, raising concerns about privacy and potential misuse if a device is lost or accessed by authorities.
These examples illustrate the bias and data privacy risks in the AI industry. What is more worrying is that as more people adopt AI tools, big tech gains an ever more effective channel for pushing its agenda, given the amount of new data being created daily through generative AI tools.
Decentralizing AI for a More Democratic Future
How can more stakeholders be involved in the development of AI? There is the established ‘corporate’ route of buying stock in the companies already in the game, but that would not solve the centralization problem, since in most cases the board of directors still ends up making the key decisions.
Luckily, we now have advanced technologies like blockchain which introduce a level playing field for everyone. By design, blockchain infrastructures are permissionless platforms that allow anyone to participate in the development of the ecosystem and its governance process. This is different from the corporate approach, where a few votes can easily decide the fate of a project or company.
To provide some more perspective, let’s take the example of Qubic; this is one of the few Layer 1 blockchains building for the AI market. The blockchain leverages a mechanism dubbed Useful Proof of Work (uPoW); unlike Bitcoin’s PoW, which expends energy solely to secure the network, Qubic’s uPoW directs some of its computational power to AI productivity tasks, including the training of neural networks, as sketched below.
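To make the concept concrete, here is a minimal Python sketch of the general uPoW idea: each “mining” iteration performs a unit of AI training work (one gradient step on a toy model), and the network would credit miners for measured training progress rather than raw hashes. Everything here, including the scoring rule, is an illustrative assumption, not Qubic’s actual protocol.

```python
# Sketch of "useful proof of work": the miner's loop does AI training
# instead of hashing random nonces. Illustrative only, not Qubic's code.
import random

def train_step(weights, data, lr=0.05):
    """One gradient-descent step on a toy linear model (the 'useful work')."""
    grad = [0.0] * len(weights)
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def loss(weights, data):
    """Mean squared error; a uPoW-style chain could score loss reduction."""
    return sum((sum(w * xi for w, xi in zip(weights, x)) - y) ** 2
               for x, y in data) / len(data)

# Toy dataset: y = 2*x0 + x1 plus a little noise.
random.seed(0)
xs = [[random.random(), random.random()] for _ in range(50)]
data = [(x, 2 * x[0] + x[1] + random.gauss(0, 0.01)) for x in xs]

weights = [0.0, 0.0]
start = loss(weights, data)
for _ in range(500):                 # each iteration is a unit of useful work
    weights = train_step(weights, data)

print(f"useful work done: loss {start:.4f} -> {loss(weights, data):.4f}")
```

In a real network the hard part is verifying that the claimed training actually happened; the sketch above only shows why training progress, unlike a hash puzzle, produces an artifact (a better model) with value beyond securing the chain.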
In addition to its Layer 1 infrastructure, Qubic is also developing a democratized AI layer, ‘Aigarth’, which will tap into the vast amount of data contributed by miners to create advanced neural networks whose training data is not biased.
More importantly, the governance structure of this AI-oriented blockchain is based on a decentralized autonomous organization (DAO) model. This means the network’s computors (contributors) have a direct say in the development of Qubic’s AI ecosystem, exercised through quorum voting, as in the sketch below.
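For intuition, here is a minimal model of DAO quorum voting: a proposal passes only once approvals reach a supermajority threshold. The 676-seat and 451-vote figures mirror Qubic’s published Computor quorum, but the code itself, including the proposal text, is a hypothetical sketch rather than the network’s implementation.

```python
# Illustrative model of quorum voting in a DAO; not Qubic's actual code.
# Seat count and threshold are assumptions taken from Qubic's public docs
# (676 computors, with 451 approvals for a ~2/3 supermajority).
from dataclasses import dataclass, field

TOTAL_SEATS = 676   # voting computor seats (assumed)
QUORUM = 451        # approvals needed to pass (assumed ~2/3 supermajority)

@dataclass
class Proposal:
    description: str
    approvals: set = field(default_factory=set)

    def vote(self, seat: int) -> None:
        if not 0 <= seat < TOTAL_SEATS:
            raise ValueError("unknown computor seat")
        self.approvals.add(seat)  # one seat, one vote; repeat votes are idempotent

    def passed(self) -> bool:
        return len(self.approvals) >= QUORUM

proposal = Proposal("Direct next epoch's spare compute to AI training")
for seat in range(500):   # 500 of the 676 seats approve
    proposal.vote(seat)
print(proposal.passed())  # True, since 500 >= 451
```

The point of the supermajority threshold is that no small faction of contributors can unilaterally steer the ecosystem, in contrast to a corporate board where a handful of votes decide.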
Although still a nascent area of development, it is intriguing to observe that blockchain-based AI innovations such as Qubic have been gaining significant traction. In fact, the latest report by DappRadar revealed that AI DApps now account for a larger share of on-chain activity, at 28%, while blockchain games have fallen to 26%.
Wrap Up
The AI revolution is only just beginning, and while it brings impressive advancements, it also raises critical concerns around centralization, bias, and data privacy. As AI tools become more integrated into daily life, the need for greater transparency and accountability becomes even more pressing. Blockchain offers a potential solution by decentralizing AI development and governance, ensuring that control is not concentrated in the hands of a few entities.