Decentralization could help humanity avoid an AI doomsday scenario

For a moment, consider how many historical facts we accept as truth without questioning their validity. What if, for example, a seventh-century book detailing an important battle was actually rewritten two centuries later? Perhaps a ninth-century leader had a scribe rewrite the account to serve their political or personal aspirations, allowing them to wield more power or forge a legacy based on a false pretext.

Of course, I’m not proposing that commonly accepted historical facts are false or manipulated. Still, the thought experiment highlights how difficult it is to verify historical data that predates the modern era, a problem that unchecked AI development could bring back.

Today’s AI development takes place in black-box silos dominated by a handful of powerful entities, putting us at risk of a dystopian future in which truths can be rewritten. OpenAI’s shift to a more closed model after promoting an open-source approach to AI development has stoked these fears and raised concerns about transparency and public trust.

If this trend becomes the dominant direction of AI, those accumulating computing power and developing advanced AI technologies and applications will be able to create alternative realities, including forged historical narratives.

The time is now for AI regulation

As long as centralized entities hide their algorithms from the public, the combined threat of data manipulation and political and socioeconomic destabilization could truly alter the course of human history.

Despite numerous warnings, organizations across the globe are sprinting to use, develop, and accumulate powerful AI tools that may surpass the scope of human intelligence within the next decade. While this technology may prove useful, the looming threat is that these developments could be abused to clamp down on freedoms, disseminate dangerous disinformation campaigns, or turn our own data into a tool for manipulating us.

There is even growing evidence that political operatives and governments use common AI image generators to manipulate voters and sow internal divisions among enemy populations.

News that iOS 18’s AI suite can read and summarize messages, including email and third-party apps, has raised worries about Big Tech gaining access to chats and private data. It raises the question: Are we stepping into a future where bad actors can easily manipulate us through our devices?

Not to fear-monger, but if the development of AI models is left at the mercy of massively powerful centralized entities, it’s easy for most of us to imagine the scenario going completely off the rails, even if both governments and Big Tech believe they’re operating in the interest of the greater good.

In this case, average citizens would never gain transparent access to the data used to train rapidly progressing AI models. And since we can’t expect Big Tech or elements of the public sector to hold themselves accountable voluntarily, establishing impactful regulatory frameworks is necessary to ensure AI’s future is ethical and secure.

To oppose corporate lobbies seeking to block any regulatory action on AI, the public must demand that politicians implement the regulations needed to safeguard user data and ensure AI advances responsibly while still fostering innovation.

California is currently working to pass a bill to rein in AI’s potential dangers. The proposed legislation would curb the use of algorithms on children, require models to be tested for their ability to attack physical infrastructure, and limit the use of deepfakes, among other guardrails. While some tech advocates worry the bill will hinder innovation in the world’s premier tech hub, others are concerned that it doesn’t do enough to address discrimination within AI models.

The debate around California’s legislative efforts shows that regulation alone isn’t enough to ensure future AI developments can’t be corrupted by a small minority of actors or a Big Tech cartel. This is why decentralized AI, alongside reasonable regulatory measures, offers humanity the best path toward leveraging AI without fear of it being concentrated in the hands of the powerful.

Decentralization = democratization

No one can predict where exactly AI will take us if left unchecked. Even if the worst doomsday scenarios don’t materialize, current AI developments are undemocratic, untrustworthy, and have been shown to violate existing privacy laws in places like the European Union.

The most impactful way to correct AI’s course and keep it from destabilizing society is to enforce transparency within a decentralized environment built on blockchain technology.

But the decentralized approach doesn’t just facilitate trust through transparency; it can also power innovation through greater collaboration, provide checks against mass surveillance and censorship, offer better network resilience, and scale more effectively by simply adding nodes to the network.

Imagine if blockchain’s immutable record had existed in biblical times: we might have far more context with which to analyze and assess meaningful historical documents like the Dead Sea Scrolls. Using blockchain to give wide access to archives while guaranteeing the authenticity of their historical data is a theme that has been widely discussed.
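To make that concrete, here is a minimal sketch in Python of how an archive could be anchored to an append-only, hash-linked ledger. The `ArchiveLedger` class and its methods are hypothetical illustrations of the technique, not any production blockchain’s API; a real deployment would rely on an actual chain with consensus across many independent nodes.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as hex."""
    return hashlib.sha256(data).hexdigest()

class ArchiveLedger:
    """Toy append-only, hash-linked ledger for anchoring document digests.
    Illustrative only: a real system would distribute these blocks across
    many independent nodes under a consensus mechanism."""

    def __init__(self):
        self.blocks = []  # each block commits to the previous one by hash

    def anchor(self, document: bytes, label: str) -> dict:
        """Record a document's digest; the content itself stays off-chain."""
        prev_hash = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        block = {
            "label": label,
            "doc_digest": sha256_hex(document),
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        block["block_hash"] = sha256_hex(json.dumps(block, sort_keys=True).encode())
        self.blocks.append(block)
        return block

    def verify(self, document: bytes, block: dict) -> bool:
        """A later reader can check that a copy of the document is unaltered."""
        return sha256_hex(document) == block["doc_digest"]

ledger = ArchiveLedger()
scroll = b"...text of a historical manuscript..."
receipt = ledger.anchor(scroll, "Dead Sea Scrolls fragment (example)")
assert ledger.verify(scroll, receipt)               # original copy checks out
assert not ledger.verify(b"edited text", receipt)   # any rewrite is detectable
```

The key property is that every later block commits to the anchored digest, so rewriting the “seventh-century account” would change its hash and immediately fail verification.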

Centralized networks benefit from low coordination costs because most participants operate under a single entity. Decentralized networks, by contrast, compensate for their higher coordination costs with more granular, market-based incentives across the compute, data, inference, and other layers of the AI stack.

Effectively decentralizing AI starts with reimagining the layers that comprise the AI stack. Every component, from computing power and data to model training, fine-tuning, and inference, must be built in a coordinated fashion with financial incentives to ensure quality and wide participation. This is where blockchain comes into play, facilitating monetization through decentralized ownership while ensuring transparent and secure open-source collaboration to counter Big Tech’s closed models.
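As a toy illustration of that incentive layer, the sketch below splits an epoch’s token rewards across nodes in proportion to their metered contributions within each layer of the stack. The layer names, metering units, and proportional-payout rule are assumptions made for this example, not the economics of any deployed protocol.

```python
from collections import defaultdict

# Hypothetical layer names for this sketch, not a protocol specification.
LAYERS = ("compute", "data", "training", "fine_tuning", "inference")

def settle_epoch(contributions: list[dict], reward_pool: float) -> dict:
    """Split an epoch's token rewards across nodes, proportional to each
    node's metered contribution within its layer. Each contribution is
    {"node": str, "layer": str, "units": float}."""
    per_layer_total = defaultdict(float)
    for c in contributions:
        per_layer_total[c["layer"]] += c["units"]

    # Give each active layer an equal slice of the pool, then pay nodes
    # within a layer in proportion to their metered work.
    active = [layer for layer in LAYERS if per_layer_total[layer] > 0]
    layer_slice = reward_pool / len(active)

    payouts = defaultdict(float)
    for c in contributions:
        share = c["units"] / per_layer_total[c["layer"]]
        payouts[c["node"]] += layer_slice * share
    return dict(payouts)

epoch = [
    {"node": "gpu-node-a", "layer": "compute", "units": 800.0},
    {"node": "gpu-node-b", "layer": "compute", "units": 200.0},
    {"node": "curator-x", "layer": "data", "units": 50.0},
    {"node": "server-y", "layer": "inference", "units": 120.0},
]
print(settle_epoch(epoch, reward_pool=3000.0))
# gpu-node-a earns 4x gpu-node-b within the compute layer; sole
# contributors to a layer collect that layer's full slice.
```

In a real network these payouts would be settled on-chain, making both the metering and the reward rule transparent and auditable rather than hidden inside a single company’s billing system.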

Any regulatory action should focus on steering AI development toward helping humanity reach new heights while enabling and encouraging competition. Establishing responsible, regulated AI is most efficient in a decentralized setting because distributing resources and control drastically reduces the potential for corruption, and that is the ultimate AI threat we want to evade.

At this point, society recognizes the value of AI as well as the multiple risks it poses. Going forward, AI development must strike a balance between enhancing efficiency and accounting for ethical and safety considerations.

Peter Ionov is a guest author for Cointelegraph and a Ukrainian entrepreneur, engineer, and inventor who leads and runs several companies: Robosoft, GT protocol, and Ukr Reklama. Peter is paving the way in Web4, robotics, and artificial intelligence. He is known for his ambitious vision of the future of humanity, which includes the idea of "work for robots, rest for people" and the development of technologies that improve people's lives and stability.
