Geoffrey Hinton, the "Godfather of AI," recently left Google to raise awareness of the risks of artificial intelligence, claiming that the technology poses a more urgent threat to the world than climate change.
According to Reuters (2), Hinton continued to raise the alarm about AI after leaving Google last week. He announced his departure in an interview with the New York Times, clarifying that his intention is not to downplay or devalue climate change, but to warn that the AI threat must be addressed urgently given its potential impact by comparison.
"With climate change, it is very easy to recommend what you should do, which is to stop burning carbon, and doing so would ultimately improve things. In contrast, it is still uncertain what should be done with AI," he stated.
Given ChatGPT's popularity, tech titans have raced to build competing artificial intelligence (AI) systems. Hinton's former employer, Google, developed "Bard," a ChatGPT-like AI program, to demonstrate its own AI prowess.
Bard has been heavily criticized by Google employees, who have cautioned that the technology could be harmful. Hinton indicated that he left Google solely so he could speak publicly and freely about AI without damaging the company, a decision widely regarded as a "responsible move."
Hinton is concerned that AI may help spread erroneous information and eliminate human jobs. According to Goldman Sachs research (4), AI might affect up to 300 million full-time jobs worldwide.
Hinton questioned whether it was too late to set restrictions on AI's rapid growth, saying, "I console myself with the usual excuse: if I hadn't done it, somebody else would have done it. It is difficult to see how you can prevent bad actors from using it for bad things."
In 1986, he co-authored the paper "Learning representations by back-propagating errors," a watershed moment in the creation of the artificial neural networks that underpin modern AI. He also received the Turing Award in 2018 for his research contributions.
Thousands of IT professionals and prominent individuals signed an open letter in April requesting a six-month moratorium on developing systems more powerful than OpenAI's GPT-4. Hinton shared the signatories' concerns but disagreed with pausing the research, arguing it was not a viable answer.
"It is utterly unrealistic," he added, "and I am in the camp that thinks this is an essential risk, and it's close enough that we ought to be working very hard right now and putting a lot of resources into figuring out what we can do about it."
In response to the letter, a committee of European Union lawmakers urged US President Joe Biden and European Commission President Ursula von der Leyen to convene a global summit on the future direction of the technology.
The committee also agreed on a landmark set of measures aimed at generative AI, which could require AI companies to disclose any copyrighted content used to train their models. Hinton commended Biden's plan to meet with AI leaders (4), promising a "frank and constructive discussion" about the importance of companies being more open about their systems.
"The tech leaders who have the best understanding of it, as well as politicians, must be involved because it affects us all, so we must all think about it," he said. Warren Buffet, the billionaire and CEO of Berkshire Hathaway, has also equated the development of the powerful technology AI to the development of the atomic bomb (5).
Buffett's scathing remarks came just days after Hinton warned that AI might pose a more immediate threat to humanity than climate change.
"When something can do all kinds of things, and I get a little worried because I know we won't be able to uninvent it, and you know we did invent the atom bomb in World War II for very, very good reason," Buffet added.
"It was enormously important that we did so," he said, "but is it good for the next two hundred years of the world that the ability to do so has been unleashed?" "AI will change everything, everything in the world except how men think and behave," he continued.
"We didn't have a choice, but when you start something, well, Einstein said after the atomic bomb, he said, this has changed everything in the world except how men think," he went on.
"And I would say the same thing, maybe not the same thing. I don't mean that, but AI can change everything except how men think and behave, and that's a big step to take," he said.
We asked ChatGPT about the potential threats of AI to humanity.
When IndiaTech questioned ChatGPT about the growing threat of AI to humanity, it responded, "As with any powerful technology, there are potential risks that must be carefully considered and mitigated," and highlighted a few potential risks, such as job displacement.
Its response, shown in the image above, emphasized concerns including bias and discrimination, cybersecurity threats, autonomous weapons, loss of privacy, and existential risks. It concluded that these threats are not unique to AI and can be minimized through responsible development, deployment, and regulation of AI systems.
All of this points to the need for control, regulation, or limits that preserve human judgment in society rather than fully relying on artificial intelligence, which can replace much human work but not creative and emotional thinking.
Meanwhile, Google has backed two AI-based climate-focused organizations in India
According to Google, two non-profit Indian organizations, Gujarat Mahila Housing Sewa Trust and Villgro Innovation Foundation, are integrating artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT) to combat water scarcity and floods.
These organizations were among 13 local sustainability organizations chosen to receive funding from the APAC Sustainability Seed Fund, which intends to investigate novel use cases for AI, ML, and IoT models that will help manage water supply and flooding concerns nationwide.
Sanjay Gupta, Google's Country Head and VP, stated, "India's population could be vulnerable to severe water-related challenges across its vast geographic landscape and many climatic zones if global temperatures rise by 1.5 degrees Celsius."
The Intergovernmental Panel on Climate Change predicts that this 1.5-degree rise could occur within the next decade. The grant comes from Google's philanthropic arm, Google.org, and was given to the Asian Venture Philanthropy Network (AVPN) in collaboration with the Asian Development Bank.
Gujarat Mahila Housing Sewa Trust strives to empower and equip women across India to establish environmentally friendly and gender-inclusive communities by offering training and technical skills in construction, technology, and urban administration.
The award funds will be used to develop an AI-based model to improve climate resilience in Amalner, Maharashtra, which is expected to serve as a model for similar small cities in India.
It also intends to create an efficient framework for mitigating the effects of climate change and increasing resilience to natural catastrophes. The initiatives aim to pioneer land utilization and catchment region mapping to predict flood and inundation susceptibility and contribute to protecting natural drainage systems.
The company also backs the Villgro Innovation Foundation's work with "CultYvate" on developing a mobile and browser-based application that will use IoT sensors, satellite data, and AI/ML models.
This program will provide farmers in India with real-time insights and guidance, assisting them in properly managing irrigation and making informed decisions regarding water usage, resulting in more sustainable practices and higher crop yields.
In total, Google has granted $3 million to the APAC Sustainability Seed Fund, established by the Asian Venture Philanthropy Network (AVPN) in collaboration with the Asian Development Bank.
The project encourages local organizations to work hard to find solutions for sustainable practices and address the effects of climate change across APAC, such as heat waves, rising sea levels, and biodiversity loss.