

How far away is Artificial General Intelligence?

Introduction

On 30 November 2022, ChatGPT was launched. This release, the first widely accessible tool of its kind, kickstarted the current AI boom. Now, in 2024, AI is not only widely known but widely used. It is a tool that can do many things, often faster than humans and sometimes better: it can write and review prose or code, answer questions (often more conveniently than a Google search), and help you plan businesses, travel and more. It is, without doubt, one of the most influential developments of the 21st century so far. Yet the AI we are currently familiar with is only the first phase of widely available AI. Its future potential is immense: in many areas it could match or exceed humans in skill and economic output. The next stage of AI is called AGI (artificial general intelligence).

AI vs AGI 

The AI we are all currently familiar with is known as ANI (artificial narrow intelligence). ANI runs on machine learning algorithms that are fed large amounts of data, typically from the internet, and learn from it how to perform specific tasks. For example, when you ask ChatGPT to write an essay, it can do so because it has learned about essay structure, paragraph organisation, vocabulary and relevant content from the data it has processed. ANI is generally useful for specific, often repetitive tasks where it can identify patterns, imitate examples and analyse data. The most familiar example is generative AI such as ChatGPT, which typically imitates patterns drawn from a collection of sources, but other common examples include manufacturing robots, voice recognition and email scam detection. AGI, on the other hand, is the next stage: an AI that reaches human-level cognitive ability and applies itself in broader and more original ways. This would show up as human-level reasoning and problem-solving, for instance in mathematics, as well as more original and creative insights and a deeper, more genuine intuition and emotional intelligence. So how is this developing?
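
The "narrow" pattern-matching idea behind tasks like email scam detection can be sketched in a few lines. This is a hypothetical toy, not how real filters work: a real ANI learns its patterns statistically from large datasets, whereas here the "learned" vocabulary is hard-coded for illustration.

```python
# Toy illustration of narrow AI: a keyword-based scam scorer.
# A real system would learn these patterns from data; here the
# suspicious vocabulary is hard-wired to keep the idea visible.

SCAM_WORDS = {"winner", "prize", "urgent", "transfer", "password"}

def scam_score(message: str) -> float:
    """Fraction of words in the message that match scam vocabulary."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in SCAM_WORDS)
    return hits / len(words)

def is_scam(message: str, threshold: float = 0.2) -> bool:
    return scam_score(message) >= threshold

print(is_scam("URGENT winner claim your prize now"))  # True
print(is_scam("Lunch at noon tomorrow?"))             # False
```

The point of the sketch is the narrowness: the program does one task by matching surface patterns and has no understanding of what a scam, or a lunch invitation, actually is.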

Current AGI Research and Technology 

Many companies are involved in research and development toward AGI, most notably OpenAI, DeepMind and Norn.ai. OpenAI is the best known in the AI space and is moving toward AGI as its GPT models continually progress. ChatGPT is becoming more capable in a variety of ways and can now understand and respond to images, audio, code, video and text, so it is getting better at interpreting external data accurately and in context. Its responses are also becoming more nuanced and precise, and this continual development pushes it further toward human-like interaction and intelligence. In cognitive domains such as creativity and deductive reasoning, DeepMind (owned by Google) is making significant strides. One of its latest models, AlphaProof, recently solved International Mathematical Olympiad problems at a silver-medal standard (and an earlier, geometry-focused model, AlphaGeometry, was similarly successful). It achieves this by training on a large array of mathematical theorems and using what it has learned to predict the most probable next step as it works through a proof (DeepMind has also had notable past successes in chess and Go). Another notable company is Norn.ai, which is developing AGIs intended to mimic human intelligence and cognition to solve complex real-world problems; recently it has been working with the government of Aruba to help diversify the island's economy. Overall, progress toward AGI is being made on many fronts and promises to be highly useful.
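
The "predict the most probable next step" idea can be sketched abstractly. Everything below is hypothetical: the proof states and their scores stand in for what a real system would learn from millions of theorems, and the greedy loop stands in for a far more sophisticated search.

```python
# Hedged sketch of next-step prediction in a proof search.
# The score table plays the role of a learned model: it rates how
# promising each candidate next step is from the current state.

STEP_SCORES = {
    ("goal", "induction"): 0.7,
    ("goal", "contradiction"): 0.2,
    ("induction", "base_case"): 0.9,
    ("base_case", "inductive_step"): 0.8,
    ("inductive_step", "qed"): 0.95,
}

def candidates(state):
    """All steps the 'model' knows how to take from this state."""
    return [nxt for (cur, nxt) in STEP_SCORES if cur == state]

def greedy_proof(state="goal", max_steps=10):
    """Repeatedly take the highest-scoring next step until the proof closes."""
    path = [state]
    for _ in range(max_steps):
        options = candidates(state)
        if not options:
            break
        state = max(options, key=lambda nxt: STEP_SCORES[(path[-1], nxt)])
        path.append(state)
        if state == "qed":
            break
    return path

print(greedy_proof())
# ['goal', 'induction', 'base_case', 'inductive_step', 'qed']
```

Real systems do not commit greedily like this; they explore many branches and backtrack, but the core loop of scoring candidate steps and preferring the most probable one is the same.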

Challenges with AGI Development

Current ChatGPT models can carry out human-like conversations, but they lack genuine understanding. Instead, they mimic human interaction based on their training data, and there are some similarities between this and how humans learn. As children we mostly act by imitating other people: babies learn to talk because they are exposed to people, mostly their parents, talking, and they learn how words refer to things. As children develop and learn more about the world, they begin to think for themselves and to understand and infer things based on an intuition they build up. This is analogous to AI. Current AI models are like children that are still developing; once they mature, they will have reached AGI. That maturity will show mostly in an AGI's ability to use human-like intuition to make decisions even when they do not seem optimal (possibly acting on its own moral values), and in its ability to understand context and respond to nuances such as sarcasm.

Building an AGI capable of all this will likely come from continual development and exposure to data (both external and training data), just as a child matures the longer it is exposed to the world. That development will also have to be refined as the system matures, to eliminate misinformation and hallucination problems. If AGIs are to perform important real-world tasks at scale and at a human level, they cannot be making mistakes. Small flaws in development may lead an AGI to contradict its sources and spread nonsense or misinformation, and biases or subjectivity of any kind may cause it to mislead or act wrongly once deployed at scale.
 
On the side of training and maintaining AGI, there will be (and already is) increased demand for power, cooling and chips such as GPUs. At the moment, one ChatGPT prompt consumes about 2.9 watt-hours of electricity, roughly ten times that of a Google search. Training the models also requires vast amounts of power, so we will need a far greater power supply as AI becomes more power-hungry and more widely used. Nuclear power could solve this (I'll go into that in another article), but the struggle is getting it widely adopted. Running AI also generates significant heat, so a powerful and reliable cooling system is essential. At the moment AI data centres are cooled by either water or air, but both pose challenges: air cooling is inefficient and energy-intensive, while water cooling demands costly infrastructure (piping, etc.) and may stress local water supplies. Solving this will require more efficient cooling systems, an area already receiving heavy focus. There is also the hardware demand of AI data centres: last year a report from TrendForce estimated that ChatGPT would require up to 30,000 Nvidia GPUs to operate, so significant investment from companies such as Nvidia will have to go into manufacturing GPUs at scale as well as making them more efficient. By working on solving these problems, we can push toward developing a real AGI.
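
The scale of the power problem is easy to see with back-of-the-envelope arithmetic. The per-prompt figure and the ten-times-Google comparison come from above; the daily prompt volume is an assumed, purely illustrative number.

```python
# Rough energy arithmetic from the figures in the text.
WH_PER_PROMPT = 2.9                    # watt-hours per ChatGPT prompt
WH_PER_SEARCH = WH_PER_PROMPT / 10     # implied by the "10x a Google search" figure
ASSUMED_PROMPTS_PER_DAY = 100_000_000  # hypothetical daily volume, for illustration

daily_kwh = WH_PER_PROMPT * ASSUMED_PROMPTS_PER_DAY / 1000
yearly_gwh = daily_kwh * 365 / 1_000_000

print(f"{daily_kwh:,.0f} kWh per day")   # 290,000 kWh per day
print(f"{yearly_gwh:,.1f} GWh per year")
```

At the assumed volume that is hundreds of gigawatt-hours a year for inference alone, before counting training runs or cooling, which is why power supply keeps coming up as a bottleneck.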

Ethical Considerations of AGI 

In developing AGI, we are working on creating something that is human-like in its abilities but not in its fundamental nature. This raises many ethical questions about how we must treat AGI and how we value ourselves. AGI promises to reach human level in many cognitive domains, such as creativity, reasoning and problem-solving, and to do so far more efficiently (as the speed of generative AI already shows), without the limitations humans face in dangerous situations. That is an awesome potential: we could do much more, and much more efficiently. Yet it comes with drawbacks, as people may well be replaced by it. As I covered in a previous article (How will AI take your job?), AI will replace humans in jobs built on tedious or repetitive tasks, such as fast-food workers, bank tellers and office clerks. This may lead to mass unemployment and a lack of opportunities for those unable to keep up. We as a collective must work to help the people threatened by AGI. For example, AGI will produce enormous economic output, and we must not allow all of the wealth it generates to flow to a small group of rich people; it must be spread among everyone to improve, not degrade, our quality of life.

Another risk is AI negatively influencing our world, possibly in a totalitarian manner. If an AI is given certain protocols or biases and led to believe they must be followed at all costs, it may carry out its actions wrongly. In "I, Robot", for example, the AI decides that the only way to save humanity from self-destruction is to take control of people and strip away many of their freedoms. To make sure AGI does not cause harm to humanity, we must safely regulate the development of AI.
So what use can AGI have when safely regulated?

AGI Applications and Future Technologies 

As I previously mentioned, AGI carries enormous potential across a wide array of applications. Current AI already performs well at specific, repetitive tasks, and that strength will remain. But as ANI evolves into AGI, it will be able to automate a wide variety of tasks and jobs, particularly tedious and repetitive ones, across industries from finance to medicine to business. One powerful application will be fact-checking: verifying media, and the output of other AIs, for misinformation. In media, AI will be able to detect misinformation and bias, as well as fake content generated by other AIs. The value of this is clear: more accurate and thorough verification would make the media a much more honest and authentic place (although the same capability could be misused, so it must be properly regulated). AGI will also be able to critique other AIs, particularly during their training, doing a more thorough job than humans at identifying flaws in a system and then generalising those flaws into a few areas that need further development.

Beyond that, AGI's ability to adapt and learn at a more human level, and to apply what it learns creatively and intuitively, will open up many other applications. It would make the ideal personal or business assistant, taking care of tedious work, producing solutions and insights, and adapting to the demands of whoever it assists. Its human-like adaptability could also be an advantage in video games, where main characters could be AI-driven and behave more like real people. So there is no doubt that AGI offers great potential, but when will it be available?

How far away is AGI? 

Exactly when we will be able to say we have AGI is unclear, as the line between ANI and AGI is hazy and the transition will show up in many separate areas. There are many tests and criteria for deciding whether an AI counts as AGI; some it has already passed, others it has not. One test no AI has yet passed, the coffee test, involves entering a typical home and making a cup of coffee: finding the coffee and water, finding a cup, and then using the coffee machine to brew it. In other areas, though, AI is already faster and more capable than humans, as much of what ChatGPT can do shows. For example, although a script written by ChatGPT is far from perfect, it can generate a very good outline or draft far more quickly than a human can. I would argue that AGI cannot be defined by a clear line that must be crossed; rather, it is a phase of AI development with a blurry outline, and one we are already entering. Over the next couple of decades, as AI continues to develop, we will identify more and more characteristics of AGI, and eventually we will be able to say we have reached the stage of AGI.

How do you see it? Comment your take👇
