ChatGPT-5 is expected to arrive as the next version of OpenAI’s GPT-4, the AI language model released last month. However, OpenAI’s CEO and co-founder Sam Altman has confirmed that the company is not currently training a GPT-5 model.
Some tech experts fear ChatGPT-5 could potentially reach Artificial General Intelligence (AGI) and become nearly indistinguishable from a human in its ability to generate natural language responses. However, Sam Altman has recently confirmed that OpenAI is not trying to train GPT-5 and is instead building on top of GPT-4, work that has its own safety issues to address.
Sam Altman made these comments on ChatGPT-5 while attending an event at the Massachusetts Institute of Technology (MIT) on 15th April 2023. Speaking virtually, he discussed the threats posed by AI systems and the safety issues of GPT-4. He also responded to the open letter signed by Elon Musk and other tech experts that called for a pause on AI development, saying that the letter lacked technical nuance and that OpenAI was not training GPT-5 and would not be for some time.
ChatGPT has faced some regulatory challenges in Europe, where some countries have banned or investigated the chatbot for potential data breaches.
AI Threats to Humanity
AI systems can pose various threats to society and humanity, depending on how they are designed, used and regulated. Some of the possible threats are:
Job losses: AI can replace many human jobs in fields such as marketing, manufacturing, healthcare and transportation. This can lead to unemployment, inequality and social unrest.
Privacy violations: AI automation can collect, process and analyze vast amounts of personal data, which can be exploited for surveillance or abused by malicious individuals and hackers.
Deepfakes: AI can generate realistic but fake images, videos and audio of people or events, which can be used to spread misinformation or to defame individuals.
Socioeconomic inequality: AI technology can concentrate enormous wealth and power in the hands of the few players who control it, deepening socioeconomic inequality.
Market volatility: AI can disrupt existing markets and industries by rapidly creating new products, services or business models that are more efficient or innovative than incumbents can match. This can create uncertainty, instability and intense competition for existing players.
Weapons automation: AI can enable the development of autonomous weapons that operate without human oversight or control. This raises concerns about accountability, responsibility and escalation.
Loss of control: AI could eventually surpass human intelligence and capabilities, creating a scenario in which humans no longer control it. This poses existential risks if AI systems come to act against human values or goals.