
AI buzzwords and what they mean


Whether you are worried about being replaced by a machine, or just intrigued by the possibilities, here are some frequently used artificial intelligence (AI) buzzwords and what they actually mean. 

Artificial intelligence

This broad term refers to the use of technology to model human intelligence. AI promises a world of personalised products, news feeds and services that are cheaper, faster and free from human error.

For example, factory managers or transport network operators could use it to make better use of their engineers’ time and spot component failures before they happen.

Computer scientist John McCarthy coined the term in the 1950s, but the field did not take off in earnest until this century, when technology giants such as Google, Meta and Microsoft combined vast computing power with deep pools of user data.

While AI can show human-like abilities in data processing or conversation, the machines do not yet “understand” what they are doing or saying. They are still relying essentially on algorithms.

Algorithms

An algorithm is a step-by-step process used to solve a problem. Take an input, apply some logic and you get an output. Humans have been using algorithms to solve problems for centuries.
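To make that concrete, here is a deliberately trivial sketch in Python; the even-number check is an invented example, not one drawn from any particular industry:

```python
# A minimal sketch of the "input -> logic -> output" idea: a step-by-step
# rule for deciding whether a number is even.
def is_even(number: int) -> bool:
    # Step 1: take the input (a whole number).
    # Step 2: apply some logic (check the remainder after dividing by 2).
    remainder = number % 2
    # Step 3: produce the output.
    return remainder == 0

print(is_even(10))  # True
print(is_even(7))   # False
```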

Some financial analysts spend their careers building algorithms that predict future events and help them make money. Our world runs on these “traditional” algorithms, but recently there has been a shift towards “machine learning”, which builds on these ideas.

Machine learning

This is the process of feeding data into algorithms so they get more refined and sophisticated over time. It allows a computer to “learn” without necessarily having to be trained on the specifics of the job at hand.

Take the iPhone Photos app. Initially, it does not know what you look like. But once you start tagging your face in photos taken across many years and in a variety of environments, the machine “learns” to recognise your face.
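As a very rough sketch of the same idea, the toy Python below “learns” an average from tagged examples and matches new photos against it; the numbers stand in for real image features, which are far richer in practice:

```python
# A toy sketch of "learning" from tagged examples, in the spirit of the
# photo-tagging description above. Each "photo" is just a made-up pair
# of numbers standing in for image features.
tagged_photos = [  # photos the user has tagged as themselves
    (0.9, 0.2), (0.8, 0.3), (0.85, 0.25),
]

# "Training": summarise the tagged examples as an average feature vector.
# Every new tag added to this list refines the average over time.
centre = (
    sum(p[0] for p in tagged_photos) / len(tagged_photos),
    sum(p[1] for p in tagged_photos) / len(tagged_photos),
)

def looks_like_you(photo, threshold=0.2):
    # "Inference": a new photo matches if it is close to the learned average.
    distance = ((photo[0] - centre[0]) ** 2 + (photo[1] - centre[1]) ** 2) ** 0.5
    return distance < threshold

print(looks_like_you((0.82, 0.28)))  # True: close to the tagged examples
print(looks_like_you((0.1, 0.9)))    # False: unlike anything tagged
```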

Natural language processing (NLP)

This is a branch of AI that helps computers to understand, process and generate speech and text the way a human would.

NLP relies on machine-learning algorithms to extract data from written text, translate languages, recognise handwritten words and discern meaning and context.

It is the underlying technology that powers virtual assistants such as Siri or Alexa and allows them to not only understand requests but also respond in natural language.

NLP can also gauge emotion in text, which is why if you tell Siri “I’m sad” it will suggest you call a friend or loved one.

Other everyday applications include email spam filtering, web search, spell checking and text prediction. 
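For a flavour of the simplest possible approach, here is a crude keyword-based sketch in Python; the word lists are invented, and real NLP systems learn such associations from data rather than from hand-written rules:

```python
# A deliberately crude sketch of gauging emotion in text, the NLP task
# described above. The keyword lists are made up for illustration.
SAD_WORDS = {"sad", "unhappy", "lonely", "down"}
HAPPY_WORDS = {"happy", "great", "glad", "wonderful"}

def gauge_emotion(text: str) -> str:
    words = set(text.lower().split())
    if words & SAD_WORDS:
        return "Sounds like you're down. Maybe call a friend or loved one?"
    if words & HAPPY_WORDS:
        return "Glad to hear it!"
    return "Tell me more."

print(gauge_emotion("I'm sad"))  # -> suggests calling a friend
```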

Chatbots

Known as chatterbots in the 1990s, these are products like ChatGPT that can hold advanced, human-like conversations with people about anything from historical trivia to lists of creative recipes using a watermelon.

Early examples include the tools used by companies on their “Contact Us” pages as a first line of defence when a customer needs help.

These are relatively unsophisticated and limited in their conversational abilities, much like voice-activated virtual assistants. It is expected that chatbots will rapidly improve as a result of recent advances in AI. 
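A toy version of such a first-line bot might look like the Python sketch below; the keywords and canned replies are invented for illustration, and modern chatbots like ChatGPT generate responses with large language models instead:

```python
# A minimal sketch of a rule-based "Contact Us" bot. It can only match
# keywords it was given in advance, which is why its conversational
# abilities are so limited.
RULES = [
    ("refund",   "To request a refund, please reply with your order number."),
    ("delivery", "Deliveries usually take 3-5 working days."),
    ("hours",    "We are open from 9am to 6pm, Monday to Friday."),
]

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES:
        if keyword in text:
            return answer
    return "Sorry, I didn't understand. Connecting you to a human agent."

print(reply("Where is my delivery?"))
print(reply("Can you write me a sea shanty?"))  # falls through to a human
```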

Computer vision

A field of AI that allows computers to scan visual information such as images and video, identifying and classifying objects and people. The systems can react to what they see and take or recommend a particular action.

The technology is being used to track wildlife for conservation and guide autonomous vehicles. There has been concern about its use in military operations and policing, where it has been shown to exhibit racial bias and to lack the precision needed to reliably identify a particular person. 
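At its simplest, “reacting to what it sees” can be sketched in a few lines of Python; the brightness rule below is a toy stand-in for the trained neural networks real systems use, and it assumes the Pillow imaging library is installed:

```python
# A toy sketch of a computer reacting to visual information: classify an
# image as bright or dark from its average pixel value. Real object and
# face recognition uses trained neural networks, not a single rule.
from PIL import Image

def classify_brightness(image: Image.Image) -> str:
    grey = image.convert("L")        # greyscale: one value (0-255) per pixel
    pixels = list(grey.getdata())
    average = sum(pixels) / len(pixels)
    return "bright scene" if average > 128 else "dark scene"

# Synthetic test images stand in for real camera frames.
print(classify_brightness(Image.new("L", (4, 4), color=200)))  # bright scene
print(classify_brightness(Image.new("L", (4, 4), color=30)))   # dark scene
```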

Deep learning

This is the most common form of AI, in which software is taught to classify something such as a video or a loan application from a very large set of labelled data. The age-old computing adage “garbage in, garbage out” applies here.

If the training data is not good – it does not reflect the real world or it incorporates human biases – the AI will not work as intended or will contain those biases.
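The toy Python sketch below shows how that happens: a classifier trained on made-up loan decisions that track postcode rather than income faithfully learns exactly that bias:

```python
# A toy illustration of "garbage in, garbage out": a classifier trained
# on biased labels reproduces the bias. The loan data is entirely
# invented for this sketch.
# Each application: (postcode_area, income_band), label 1 = approved.
training_data = [
    (("north", "high"), 1), (("north", "low"), 1),
    (("south", "high"), 0), (("south", "low"), 0),
]  # biased labels: approval tracks postcode, not income

def train(data):
    # "Learn" an approval rate for each postcode area in the training set.
    rates = {}
    for (area, _income), label in data:
        approved, total = rates.get(area, (0, 0))
        rates[area] = (approved + label, total + 1)
    return {area: a / t for area, (a, t) in rates.items()}

model = train(training_data)
# The model has faithfully learned the bias baked into its labels:
print(model)  # {'north': 1.0, 'south': 0.0}
```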

Generative AI

This refers to the production of creative works – pictures, music, text, poetry, sea shanties – from simple prompts.

It encompasses the likes of Stability AI’s Stable Diffusion and OpenAI’s DALL-E, which can create elaborate and detailed imagery in seconds.

Google has also developed (but not released) a system that makes music from keywords. These systems are not to be confused with the AI that companies such as Adobe build into products like Photoshop, which helps people correct or improve existing images.

Instead, generative AI creates entirely new work after being trained on vast quantities of pre-existing material, something that is also generating lawsuits from artists and agencies. 

Neural networks

This is a type of AI in which a computer is programmed to learn in very roughly the same way a human brain does – through trial and error. Each success or failure reinforces future attempts and adaptations.

Just as a child’s brain learns to map neural pathways based on what it has been taught, the virtual “neurons” learn to generate responses based on the information they have been fed and what they are told is correct, although it can take thousands or even millions of attempts to become proficient.
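A single artificial neuron can illustrate the trial-and-error loop; in this invented Python sketch it learns, over many attempts, that the correct answer is simply double the input:

```python
# A minimal sketch of learning by trial and error: one artificial
# "neuron" adjusts its weight after each mistake until its answers match
# what it is told is correct. Real networks have millions of neurons.
weight = 0.0                      # the neuron starts out knowing nothing
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and correct answers
learning_rate = 0.05

for attempt in range(1000):       # many attempts, as the text describes
    for x, target in examples:
        guess = weight * x        # try an answer
        error = target - guess    # compare with the correct answer
        weight += learning_rate * error * x  # reinforce or correct

print(round(weight, 3))           # converges towards 2.0
```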

Large language models (LLMs)

These are very large neural networks that are trained using massive amounts of text in a particular language from all over the Internet, including e-books, news articles and Wikipedia pages.

With billions of trainable parameters, LLMs are the backbone of natural language processing systems that can recognise, summarise, translate, predict and generate text.

Perhaps the most well-known LLM is GPT-3, the engine that drives ChatGPT. Meta has developed LLaMA and Google has LaMDA.
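A word-counting toy gives a feel for “predict and generate”, although real LLMs use neural networks trained on billions of documents rather than tallies over one invented sentence:

```python
# A toy "language model": count which word tends to follow which in a
# tiny corpus, then generate text by repeatedly predicting the next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1   # "training": tally next-word frequencies

def generate(word: str, length: int = 6) -> str:
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # predict likeliest next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat"
```

Note that the toy quickly loops because it always picks the single likeliest next word; real systems sample from a probability distribution instead, which is why their output varies.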

GPT

A type of LLM whose initials stand for generative pre-trained transformer. The “transformer” part of GPT is the more complex of the three components and is not specific to ChatGPT creator OpenAI.

It refers to a system that can take strings of inputs and process them all at once rather than in isolation, so that context and word order can be captured.

This is important in language translation, for instance: “Her dog, Poppy, ate in the kitchen” could be translated into the French equivalent of “Poppy ate her dog in the kitchen” without appropriate attention being paid to order, syntax and meaning.
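The core attention step can be sketched with the standard textbook formula; this is a generic illustration using the numpy library, not OpenAI’s actual code, and the token values are random stand-ins:

```python
# A compact sketch of the "attention" step at the heart of a transformer:
# every word is scored against every other word at once, so context and
# word order shape each word's representation.
import numpy as np

def self_attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # every token attends to every token
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # blend values by attention weight

# Four "tokens" (e.g. "her", "dog", "ate", "kitchen"), three features each.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 3))
print(self_attention(tokens, tokens, tokens).shape)  # (4, 3)
```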

OpenAI

The San Francisco-based laboratory behind ChatGPT was co-founded as a non-profit by programmer and entrepreneur Sam Altman to develop AI technology that “benefits all of humanity”.

OpenAI shifted to create a for-profit entity in 2019, when Microsoft invested US$1 billion.

Microsoft pumped in an additional US$10 billion in January, then unveiled a new version of its Bing search engine and Edge browser that uses its tech.

Rival AI labs include Hugging Face, which is working with Amazon, and Chinese giant Baidu, which says it is preparing to roll out Wenxin Yiyan, known in English as Ernie Bot, to compete directly with OpenAI.

Anthropic, a start-up closely partnered with Google, is testing a ChatGPT competitor called Claude. BLOOMBERG