ChatGPT: Europe Sounds The Alarm On The Growing Use Of OpenAI

Photo by Shantanu Kumar / Unsplash

One of the most important investigative articles on artificial intelligence that you will ever read.

Europe sounds the alarm on ChatGPT

ChatGPT has recorded over 1.6 billion visits since December 2022

Alarmed by the growing risks posed by generative artificial intelligence (AI) platforms like ChatGPT, regulators and law enforcement agencies in Europe are looking for ways to slow humanity’s headlong rush into the digital future.

With few guardrails in place, ChatGPT, which responds to user queries with essays, poems, spreadsheets, short stories, novellas, full-length novels, and computer code, has recorded over 1.6 billion visits since December 2022.

Europol, the European Union Agency for Law Enforcement Cooperation, warned at the end of March 2023 that ChatGPT, just one of thousands of AI platforms currently in use, can assist criminals with phishing, malware creation, and even terrorist acts.
REFERENCES : https://www.europol.europa.eu/publications-events/publications/chatgpt-impact-of-large-language-models-law-enforcement

"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps,” the Europol (the EU's investigative police agency) report stated. “As such,

ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime, and child sexual abuse.”

Europol is used to monitor individuals, groups, and governments, as well as products and services.
REFERENCES :
https://www.europol.europa.eu/cms/sites/default/files/documents/Tech%20Watch%20Flash%20-%20The%20Impact%20of%20Large%20Language%20Models%20on%20Law%20Enforcement.pdf

In March 2023, Italy imposed a temporary ban on ChatGPT after a glitch exposed user data. The Italian privacy regulator, the Garante, threatened the program’s creator, OpenAI, with millions of euros in fines for privacy violations unless it clarifies where users’ information goes and establishes age restrictions on the platform.

REFERENCES :
https://www.cnbc.com/2023/03/23/openai-ceo-says-a-bug-allowed-some-chatgpt-to-see-others-chat-titles.html

Spain, France and Germany are looking into complaints of personal data violations, and in April 2023 the EU's European Data Protection Board formed a task force to coordinate regulation across the 27-country European Union.

“It’s a wake-up call in Europe,” EU legislator Dragos Tudorache, co-sponsor of the Artificial Intelligence Act, which is being finalized in the European Parliament and would establish a central AI authority, told The New York Times recently. “We have to discern very clearly what is going on and how to frame the rules.”

Even though artificial intelligence has been a part of everyday life for several years (Amazon’s Alexa and online chess engines are just two of many examples), nothing has brought home the potential of AI like ChatGPT, an interactive “large language model” that can answer users’ questions or complete tasks such as drafting website code, computer programs, and even proposed trading strategies in a matter of seconds.
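To make the “interactive” part concrete, here is a minimal sketch of how a developer could put a question to the same underlying model through OpenAI’s API, using the openai Python package as it existed in early 2023; the model name and prompt are illustrative assumptions, not details from this article.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required

# Send a single user question to the chat model and print its reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier in early 2023
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the EU's draft AI Act in three sentences."},
    ],
)

print(response.choices[0].message.content)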

“ChatGPT has knowledge that even very few humans have,” said Mark Bünger, co-founder of Futurity Systems, a Barcelona-based consulting agency focused on science-based innovation. “Among the things it knows better than most humans is how to program a computer. So, it will probably be very good and very quick to program the next, better version of itself. And that version will be even better and program something no humans even understand.”

The startlingly efficient technology also opens the door for all kinds of fraud, experts say, including identity theft and plagiarism in schools.

“For educators, the possibility that submitted coursework might have been assisted by, or even entirely written by, a generative AI system like OpenAI’s ChatGPT or Google’s Bard, is a cause for concern,” Nick Taylor, deputy director of the Edinburgh Centre for Robotics, told Bloomberg Financial News.

OpenAI and Microsoft, which has financially backed OpenAI and has built a rival ChatGPT-powered chatbot into its Bing search engine, did not respond to requests for comment for this article.

“AI has been around for decades, but it’s booming now because it’s available for everyone to use,” Cecilia Tham, CEO of Futurity Systems, recently told the London-based Economist.

Since ChatGPT was introduced to the public as a free trial on Nov. 30, 2022, Tham said, programmers have been adapting it to develop thousands of new chatbots, from PlantGPT, which helps monitor houseplants, to the hypothetical ChaosGPT, “designed to generate chaotic or unpredictable outputs,” according to its website, and ultimately to “destroy humanity.”
REFERENCES : https://finance.yahoo.com/news/meet-chaos-gpt-ai-tool-163905518.html

Another variation, AutoGPT, short for Autonomous GPT, can perform more complicated goal-oriented tasks. “For instance,” said Tham, “you can say ‘I want to make 1,000 euros a day. How can I do that?’ and it will figure out all the steps to that goal. It can also help you buy one troy ounce of gold the quickest and cheapest way possible. But what if someone says ‘I want to kill 1,000 people. Give me every step to do that’?” Even though the ChatGPT model has restrictions on the information it can give, she notes that “people have been able to hack around those.”

REFERENCES :
https://autogpt.net/auto-gpt-vs-chatgpt-how-do-they-differ-and-everything-you-need-to-know/
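Under the hood, agents like AutoGPT essentially wrap a language model in a loop that breaks a stated goal into steps and works through them one at a time. The sketch below illustrates that plan-and-execute loop in broad strokes, assuming a hypothetical ask_llm() helper that forwards a prompt to a language model; it is a simplified illustration, not AutoGPT’s actual code.

# Simplified illustration of the goal-driven loop behind agents like AutoGPT.
# ask_llm() is a hypothetical stand-in for a call to a language model API;
# the real project adds tools such as web search, file access and memory.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real language model call")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    completed = []
    for _ in range(max_steps):
        # Ask the model for the single next step toward the goal,
        # given everything that has been done so far.
        next_step = ask_llm(
            f"Goal: {goal}\n"
            f"Steps already completed: {completed}\n"
            "Reply with the next concrete step, or DONE if the goal is met."
        )
        if next_step.strip().upper() == "DONE":
            break
        # AutoGPT would now execute the step with its tools; here we only record it.
        completed.append(next_step)
    return completed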

The potential hazards of chatbots, and of AI in general, prompted the Future of Life Institute, a think tank focused on technology, to publish an open letter in March 2023 calling for a temporary halt to AI development.

Signed by Elon Musk and Apple co-founder Steve Wozniak, it noted that “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” and “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
REFERENCES :
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

The signatories called for a six-month pause on the development of AI systems more powerful than GPT-4 so that regulations could be hammered out, and they asked governments to “institute a moratorium” if the key players in the industry did not voluntarily do so.

EU parliamentarian Brando Benifei, co-sponsor of the AI Act, scoffed at that idea. “A moratorium is not realistic,” he told Yahoo News. “What we should do is to continue working on finding the correct rules for the development of AI,” he said. “We also need a global debate on how to address the challenges of this very powerful AI.”

In the third week of April 2023, EU legislators working on AI published a “call to action” requesting that President Biden and European Commission President Ursula von der Leyen “convene a high-level global summit” to nail down “a preliminary set of governing principles for the development, control and deployment” of AI.
REFERENCES :
https://media.hotnews.ro/media_server1/document-2023-04-18-26212572-0-call-action-very-powerful-from-the-european-parliament.pdf

Tudorache told The New York Times recently that the AI Act, which is expected to be enacted in 2024, “brings new powers to regulators to deal with AI applications” and gives EU regulators the authority to hand out hefty fines.

The legislation also ranks various AI activities by level of risk, including some that would be prohibited outright, such as “social scoring,” a dystopian monitoring scheme that would rate virtually every social interaction on a merit scale.
REFERENCES :
https://www.kaspersky.com/blog/social-scoring-systems/

“Consumers should know what data ChatGPT is using and storing and what it is being used for,” Sébastien Pant, deputy head of communications at the European Consumer Organisation (BEUC), told reporters at a press conference in March 2023.  "It isn’t clear to us yet what data is being used, or whether data collection respects data protection law.”

The U.S., meanwhile, continues to lag on taking concrete steps to regulate AI, despite concerns recently raised by FTC Commissioner Alvaro Bedoya on his Twitter feed that “AI is being used right now to decide who to hire, who to fire, who gets a loan, who stays in the hospital and who gets sent home.”
REFERENCES :
https://twitter.com/BedoyaFTC/status/1644454444292087809

When Biden was recently asked whether AI could be dangerous, he replied, “It remains to be seen — could be.”

The differing attitudes toward protecting consumers’ personal data go back decades, Gabriela Zanfir-Fortuna, vice president for global privacy at the Future of Privacy Forum, a think tank focused on data protection, wrote on the organization’s website.

"The EU has placed great importance on how the rights of people are affected by automating their personal data in this new computerized, digital age, to the point in which it included a provision in its Charter of Fundamental Rights,” Zanfirt-Fortuna said.

European countries such as Germany, Sweden and France adopted data protection laws 50 years ago, she added. “U.S. lawmakers seem to have been less concerned with this issue in previous decades, as the country still lacks a general data protection law at the federal level.”

In the meantime, Gerd Leonhard, author of “Technology vs. Humanity,” and others worry about what will happen when ChatGPT and more advanced forms of AI are used by the military, banking institutions, and those working on environmental problems.

“The ongoing joke in the AI community,” said Leonhard, “is that if you ask AI to fix climate change, it would kill all humans. We are the true evil, the cancer, the disease. Take humans out of the equation, and everything, and I mean everything, goes back to being a paradise, because all evil ends immediately. It's inconvenient for us, but it is the most logical answer.”

FURTHER READINGS :

ChaosGPT: Empowering GPT with Internet and Memory to Destroy Humanity
https://youtu.be/g7YJIpkk7KM

Auto-GPT (GitHub repository)
https://github.com/Significant-Gravitas/Auto-GPT
