Opinion: ChatGPT-like uncontrolled AI can be perilous for Indian society far beyond any other technology in modern times
By Rohit Mittal & Jaijit Bhattacharya
The CEO of OpenAI, a generative Artificial Intelligence (AI) startup, is reported to have stockpiled guns, gold and cartons of antibiotics to tide over any doomsday that may strike if generative AI goes wrong.
The point is: what can go wrong?
For the time being, most of us are focused on the positives of AI and the transformative changes it can bring about.
For the record, generative AI is powered by Large Language Models (LLMs). LLMs are a type of AI technology that uses deep learning algorithms to analyse and understand natural language. These models are typically trained on large datasets of text – such as books, articles and other written materials – to learn the patterns and structure of human language.
The goal of LLMs is to enable AI systems to understand and generate human-like language, which can then be used in a variety of applications, including natural language processing, machine translation and text generation. So far, LLMs have been a success story and have witnessed widespread adoption. The AI bot ChatGPT reached an estimated 100 million monthly active users within a mere two months of its launch, making it the fastest-growing consumer application in history.
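The core idea described above, learning which words tend to follow which from a body of text, then sampling to generate new text, can be illustrated with a deliberately tiny sketch. This is not an actual LLM (the corpus, words and function here are illustrative assumptions); real LLMs perform the same next-token prediction with deep neural networks trained on billions of documents.

```python
import random
from collections import defaultdict

# A toy "bigram" model: for each word, record the words that follow it
# in a small illustrative corpus, then generate text by sampling.
corpus = (
    "large language models learn the patterns of human language . "
    "language models generate human like text . "
    "models learn patterns from large datasets of text ."
).split()

# Count, for each word, every word observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("language"))
```

Scaling this pattern-completion idea up by many orders of magnitude is, loosely, what makes LLM output read as human-like, and also why the model reproduces whatever patterns (including biases) its training data contains.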
Good and Bad
Obviously, LLMs hold tremendous benefits. Large language models can improve natural language processing and enable more effective communication between humans and machines. These models can enable the development of better voice recognition and text-to-speech technologies, which can improve accessibility for people with disabilities.
According to the World Health Organisation (WHO), there are an estimated 70 million people with disabilities in India, all of whom could benefit greatly from these technologies. LLMs can also help improve machine translation, allowing people to communicate seamlessly with individuals who speak different languages.
So Far, So Good
But then, LLMs also pose major threats to Indian society. Large language models can perpetuate and amplify existing biases and stereotypes in the data they are trained on. India is a very diverse country. Imagine one group using them to perpetuate stereotypes about another across media platforms. These models can also be used to generate fake news and manipulate public opinion.
The dangers are more real in a democratic society like India, where it is easier to share fake news and get away with it.
The development of LLMs can lead to a concentration of power and control in the hands of a few large companies. The use of large language models can also result in privacy concerns as they collect and process vast amounts of personal data.
None of this augurs well for any society, India included. The end result could be chaos and increased strife. This possibly explains why a palpably paranoid AI startup CEO has stockpiled what he considers essentials to pull through such trying times.
The AI challenges captured above, and many more not covered here, require preventive measures. India has already come out with position papers and a well-crafted National Strategy for Artificial Intelligence, which focuses on fostering innovation and collaboration with the AI industry in five specific areas. The Ministry of Electronics and Information Technology has already constituted four committees on AI that cover almost all issues related to AI, including ethical and legal issues. It would be important to extend the scope of these committees to also examine “doomsday” scenarios so that India is prepared for any eventuality.
To amplify the issue: when Russia invaded Ukraine on February 24, 2022, instead of first using the usual aerial power to degrade adversary capabilities, Russia unleashed a cyberattack on February 23. Fortunately, Ukraine had the capabilities to defend against the cyberattack and was able to get its banks, communication networks and other systems back on track within six hours. What if the next cyberattack is built using learning algorithms that allow the attack vector to modify itself and bypass cyber defences? Are we ready to defend ourselves against such attacks?
India is taking definitive steps to create the data ecosystem and to invest in domestic AI research and development to build our own capabilities and expertise in the field. This will enable us to create and control our own AI systems and algorithms, to ensure technological sovereignty in AI.
Incidentally, IDC predicts that China’s AI investments are expected to reach US$26.69 billion in 2026, accounting for about 8.9 percent of global investment.
In addition, Peking University introduced the country’s first undergraduate course in AI in 2004. Since then, 30 other universities in China have introduced similar courses.
What India has already managed is remarkable. Against 7,000 startups and $5 billion in venture capital funding in 2015, India registered 60,000 startups and more than $50 billion in VC funding in 2022. Against 8 unicorns in 2015, India had 108 unicorns in 2022. Many of them are AI-based startups. India now needs to accelerate startups with a focus on AI and leverage them to create defences against the weaponized AI of adversaries.
Uncontrolled AI can be perilous for Indian society far beyond any other technology in modern times. And needless to say, the weaponization of AI can have a severe impact on India. Thanks to the initiatives of the Indian government, startups in India will get access to significant data. They already have access to capital. The time is ripe to put together teams that leverage the ecosystem created in India to work on defensive AI systems, along the lines of the Manhattan Project that led to the creation of atomic weapons.
(Rohit Mittal works at Google, building next-generation AI microprocessors; Jaijit Bhattacharya is the President of the Centre for Digital Economy Policy. Views are personal.)