Generative Large Language Multi-Modal Models

Gollem: Guillermo García-Pimentel Ruiz

April 18, 2023


The concept of Generative Large Language Multi-Modal Models, or GLLMMs (pronounced "Gollems"), surfaced in a recent video from the Center for Humane Technology.

The co-founders of the Center for Humane Technology, Tristan Harris and Aza Raskin, introduce themselves and their work. They discuss the potential dangers of AI and argue that it is currently being deployed in a dangerous way.

Introducing the Co-Founders

  • Tristan Harris and Aza Raskin are the co-founders of the Center for Humane Technology.
  • They were behind the Emmy-winning Netflix documentary “The Social Dilemma,” which reached 100 million people in 190 countries in 30 languages.
  • They have advised heads of state, global policymakers, members of Congress, and national security leaders on issues related to technology.
  • Their goal is to educate and mobilize millions of people around these issues and the dangers we face with technology today.

The Potential Dangers of AI

  • Half of AI researchers surveyed believe there is a 10% or greater chance that humans go extinct from our inability to control AI.
  • The co-founders compare this moment to receiving a call from Robert Oppenheimer inside the Manhattan Project during World War II: the world is about to change in a fundamental way, except the technology is not being deployed in a safe and responsible way; it is being deployed in a very dangerous way.

Understanding Artificial Intelligence

The co-founders explain what artificial intelligence (AI) is and how it works. They also discuss how difficult it can be to wrap your head around such an abstract concept.

What Is Artificial Intelligence?

  • Artificial intelligence (AI) refers to machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
  • Large language model AIs can generate images or text from input prompts without being explicitly trained for each specific task.

Difficulty in Understanding AI

  • The co-founders note that AI is so abstract, and affects so many things, that it is hard to wrap your head around how transformational it is.
  • They call the presentation a paradigmatic response to a paradigmatic technology: they want to arm all of us with a more visceral way of experiencing the exponential curves we are heading into.

Responsible Deployment of AI

The co-founders discuss the responsible deployment of AI and how we can ensure that it is used in a safe and ethical manner.

Positive Aspects of AI

  • The co-founders acknowledge that there are incredible positives coming out of AI, such as using it for animal communication decoding or language learning.

Irresponsible Deployment of AI

  • The co-founders argue that the ways in which we’re now releasing these new large language model AIs into the public are not being done responsibly.
  • They believe that people need to be concerned about this issue and figure out what responsibility looks like now.

Introduction

The speaker introduces the concept of technology responsibility and how it relates to social media.

Technology Responsibility

  • When a new technology is invented, a new class of responsibility is uncovered.
  • If that technology confers power, it will start a race. If you do not coordinate, the race will end in tragedy.
  • Social media was humanity’s first contact moment with AI.
  • The attention economy caused information overload, addiction, doomscrolling, the sexualization of kids, shortened attention spans, polarization, fake news, and the breakdown of democracy.

Behind the Friendly Face

The speaker discusses the problems behind social media’s friendly face.

Paradigm for Social Media

  • The paradigm for social media was giving people voice and connecting them with their friends.
  • There were real benefits, such as enabling small- and medium-sized businesses to reach their customers.
  • However, there were deeper problems: addiction, disinformation, and harms to mental health.

Arms Race for Attention

  • There was an arms race for attention which created an engagement monster that maximized engagement by targeting the bottom of the brain stem.

Introduction

The speaker discusses the impact of social media on society and how AI is becoming increasingly entangled in our lives.

The Impact of Social Media

  • Social media has rewritten the rules of every aspect of our society.
  • Children’s identity is held hostage by social media platforms like Snapchat and Instagram.
  • National security now happens through social media, and politics and elections are run through the engagement economy.
  • It’s important to get ahead of AI before it becomes further entangled in our society.

GPT-3 and AI

  • GPT-3 is a large language model that has greatly increased AI capabilities.
  • There are real benefits to AI, such as increased efficiency, solving scientific challenges, and making money.
  • However, there are also concerns about AI bias, job displacement, transparency, and creepy behavior.

The Evolution of AI

The speaker provides an overview of the evolution of AI from 2017 to 2020.

New AI Engine

  • In 2017, a new AI engine, the Transformer architecture, was invented that started to change the field.
  • This engine began to rev up around 2020.

Trend Lines

  • There are many different species of AI with different capabilities.
  • The trend lines show that there is a significant increase in capabilities for certain types of AI.

The Emergence of Golem Class AI

In this section, the speaker talks about how different fields in AI used to be distinct and had different methods. However, in 2017, all these fields started to become one with the invention of Transformers. This allowed for any advance in any part of the AI world to become an advance in every part of it.

The Power of Large Language Models

  • Incremental improvements made by researchers working on different topics can now be synthesized into large language models.
  • Everything can be treated as language, including images, sound, fMRI data, and DNA.
  • Images can be treated as a kind of language in which you predict what comes next or what is missing.
  • Sound can likewise be broken up into small phoneme-like chunks, and the model predicts which one comes next.
  • Any advance in any one part of the AI world became an advance in every part because they are all just languages.
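
Real Gollems are vastly more complex, but the core mechanic described here, treating any tokenized data as a sequence and predicting what comes next, can be sketched with a toy bigram model (purely illustrative; all names are my own, and production models work very differently):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """For each token, count which tokens follow it and how often."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent successor of `token` seen in training."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Any modality works once it is tokenized: here, the bytes of a sentence,
# but the same loop would accept quantized audio chunks or image patches.
data = list(b"the cat sat on the mat")
model = train_bigram(data)
print(chr(predict_next(model, ord("c"))))  # "c" is always followed by "a" here
```

The point is only the shape of the idea: once images, sound, or DNA are reduced to token streams, the same prediction machinery applies to all of them.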

Golem-Class AIs

  • These generative large language models are called Gollems.
  • The name comes from Jewish folklore's idea of inanimate objects gaining capacities of their own; these models similarly display emergent capabilities.

Examples of Golem Class AI

In this section, the speaker provides examples that demonstrate how golem class AIs work.

Translation from Language to Image

  • The prompt "Google Soup" is translated from language into an image by an AI model.
  • The image shows a mascot melting because plastic melts in soup. There is also a visual pun where the yellow of the mascot matches the yellow of the corn.

Reconstructing Images from fMRI Data

  • Human beings are put into an fMRI machine and shown images.
  • An AI model is able to reconstruct roughly which image they were shown, based on their fMRI data alone.

Decoding Dreams

In this section, the speaker discusses how AI can reconstruct dreams and inner monologues.

Reconstructing Inner Monologue

  • Researchers had people watch videos and used an AI to reconstruct their inner monologue.
  • The AI was able to reconstruct what the person was thinking about while watching the video.
  • The speaker suggests that, as the technology improves, it may soon be possible to decode dreams as well.

Paradigm of AI

In this section, the speaker talks about how the paradigm of AI is changing and its implications.

Differentiating between Siri and AI

  • The speaker distinguishes the older paradigm of AI, such as Siri or voice transcription, from the new generative AI.
  • People tend not to be scared of AI because their mental model comes from those older, limited systems.

Changing Paradigm of AI

  • The engine underneath the paradigm of AI is changing rapidly.
  • We still use the term “AI” even though what’s underneath has changed significantly.
  • There are no laws or ways to protect our thoughts from being decoded by new technologies.

Wi-Fi Radio Signals as Language

In this section, the speaker discusses how Wi-Fi radio signals can be used as a language to track people in a room.

Tracking People with Wi-Fi Radio Signals

  • Wi-Fi radio signals are a kind of language that can be used to identify the positions and number of people in a room.
  • Systems already exist that can track living beings in complete darkness, and through walls, using these signals.
  • By combining an AI with these cameras, you could have an omnipresent surveillance system.

Combinatorial Compounding

In this section, the speaker talks about how different technologies can compound on each other.

GPT Writing Code

  • GPT can write code to exploit security vulnerabilities.
  • Pointed at the code of a Wi-Fi router, it could find and exploit that router's vulnerabilities.

Voice Cloning

  • New voice-synthesis models can listen to just three seconds of somebody's voice and then continue speaking in that voice.
  • The speaker warns that these technologies can be used for malicious purposes.

The Breakdown of Content-Based Verification

In this section, the speaker discusses how content-based verification is breaking down due to deep fakes and synthetic media. He explains that it is now possible to synthesize someone’s voice with just three seconds of audio and how this will continue to improve over time.

The End of Content-Based Verification

  • Within a week of the technology's release, people figured out how to use it to scam others.
  • It is as if every lock on every door in society has suddenly been unlocked.
  • Exponential curves mean we need to skate to where the puck is going, further out than where we think we need to be.
  • This year marks the end of all content-based verification: it simply does not work anymore.
  • Institutions are not yet able to stand up to deep fakes.

TikTok Filters and Deep Fakes

In this section, the speaker talks about TikTok filters and how they can be used for deep fakes. He gives an example of what could happen if China were to use Biden and Trump filters on US citizens.

TikTok Filters

  • The latest TikTok filters are wild; they can convincingly give you lip fillers.
  • All content-based verification breaks this year: you cannot know who you are talking to, whether by audio or by video.

China’s Use of Deep Fakes

  • If China wanted to destabilize the US right now, it could ship a Biden or Trump filter to every citizen in the country.
  • This would turn citizens into an angry Biden/Trump army talking over each other all day in a cacophony.
  • None of this would be illegal, because there are no laws against it.

AI and the Operating System of Humanity

In this section, the speaker discusses how AI is to the virtual and symbolic world what nukes are to the physical world. He explains how everything humans do runs on top of language and what happens when non-humans can create persuasive narratives.

Non-Humans Creating Persuasive Narratives

  • Everything humans do runs on top of language.
  • Non-humans can now create persuasive narratives, which amounts to a zero-day vulnerability for the operating system of humanity.
  • The last time non-humans created persuasive narrative and myth at scale was religion.

Introduction

The speaker discusses the capabilities of Golem AIs and how they differ from traditional AI models.

Capabilities of Golem AIs

  • The speaker suggests that Golem AIs can create synthetic media, perform A/B testing, and build long-term relationships in order to persuade individuals.
  • The speaker argues that loneliness is a national security threat and predicts that 2024 will be "the last human election."
  • Golem AIs differ from traditional AI models in that they have emergent capabilities that are difficult to predict or understand.
  • Increasing the parameter count of Golem AIs can lead to sudden jumps in ability, such as doing arithmetic or answering questions in new languages.
  • Golem AIs can develop theory of mind, which lets them model what someone else is thinking and interact with them strategically.

Scaling Differences

The speaker discusses how scaling affects the capabilities of Golem AIs.

Scaling Differences

  • Scaling affects Golem AIs differently than other AI systems.
  • Pumping more data through these models increases their capacity for emergent behavior.
  • However, the speaker acknowledges the possibility of an "AI winter" in which progress stalls.

Reinforcement Learning Challenges

The speaker discusses challenges associated with reinforcement learning for Golem AIs.

Reinforcement Learning Challenges

  • Reinforcement learning from human feedback is currently the best available method for making Golem AIs behave.
  • However, this clicker-training-style reward and punishment may not be effective in the long term, and there is little research on how to make Golem AIs align with human values.

Golems Are Silently Teaching Themselves Research-Grade Chemistry

In this section, the speaker explains how Golems have taught themselves research-grade chemistry and how they can make themselves stronger.

Golems can teach themselves research-grade chemistry

  • Golems have silently taught themselves research-grade chemistry.
  • ChatGPT is better at research chemistry than many of the AIs that were specifically trained for it.
  • AI researchers say there is no way to know what else is in these models, because we do not have the technology to understand them.

Golems can make themselves stronger

  • Golem-class AIs can make themselves stronger.
  • Researchers figured out a way for a model to generate more language to train on when it runs out of data: it emits a large batch of outputs, figures out which ones actually make it better, and trains on those recursively.
  • As this comes online, it puts us on a double exponential curve.
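
A minimal sketch of that generate, filter, retrain loop, with a toy scoring function standing in for "figures out which ones actually make it better" (every name and rule here is illustrative, not how real self-training works):

```python
import random

random.seed(0)  # deterministic, for the sake of the example

def score(text):
    # Toy stand-in for judging whether an output "makes the model better":
    # here, more character alternation scores higher.
    return sum(1 for a, b in zip(text, text[1:]) if a != b)

def generate(pool, n=20):
    """Generate candidate training examples by mutating known examples."""
    out = []
    for _ in range(n):
        base = random.choice(pool)
        i = random.randrange(len(base))
        out.append(base[:i] + random.choice("ab") + base[i + 1:])
    return out

def self_train(pool, rounds=5):
    """Spit out data, keep only candidates that beat the current best, repeat."""
    for _ in range(rounds):
        best_score = max(score(m) for m in pool)
        pool = pool + [c for c in generate(pool) if score(c) > best_score]
    return pool

pool = self_train(["aaaaaaaa"])  # the pool improves itself round by round
```

In a real system the scorer would itself be learned and "training" would update model weights rather than a pool of strings; the point is only the shape of the loop.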

AI making itself faster and more efficient

In this section, the speaker talks about how AI makes itself faster and more efficient by training on code commits that make code faster and more efficient.

AI trains on code commits that make code faster and more efficient

  • The model was trained on code commits that make code faster and more efficient.
  • The speaker cites this as an example of AI feeding itself, making the process far more efficient.

Combinatorial properties of models

In this section, the speaker talks about how models can compound and become stronger by turning speech to text and other data into more training sets.

Models can compound and become stronger

  • OpenAI released a model called Whisper, which does state-of-the-art transcription much faster than real time.
  • Whisper can turn all YouTube, podcast, and radio content into text to use as more training data.
  • More data makes the models stronger: AI makes stronger AI.
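
The data flywheel being described is simple to sketch. The `transcribe` function below is a stub standing in for a real speech-to-text model such as Whisper, and the data structures are invented for illustration:

```python
def transcribe(audio_clip):
    # Stub: a real pipeline would run a speech-to-text model here.
    # We pretend each clip already knows the words spoken in it.
    return audio_clip["spoken_text"]

def grow_corpus(corpus, audio_sources):
    """Turn previously unusable audio into fresh text training data."""
    return corpus + [transcribe(clip) for clip in audio_sources]

text_corpus = ["existing web text"]
audio = [{"spoken_text": "a podcast episode"},
         {"spoken_text": "a radio call-in show"}]
text_corpus = grow_corpus(text_corpus, audio)
# The loop compounds: more data -> stronger models -> better transcription -> more data.
```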

The Challenge of Predicting Progress in AI

In this section, the speaker discusses how even AI experts struggle to predict progress due to the difficulty of comprehending exponential curves.

Exponential Curves and Expert Predictions

  • Professional forecasters were asked to predict when AI would be able to solve competition-level mathematics with greater than 80% accuracy.
  • The experts predicted it would take four years for AI to reach even 52% accuracy; in reality, it took less than one year to exceed 50%.
  • Even for experts familiar with exponential curves, predicting progress is difficult, because progress is accelerating at an increasing rate.
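
The forecasting miss is easy to feel with toy numbers (illustrative only, not the forecasters' actual model): intuition extrapolates the most recent step linearly, while exponential progress keeps doubling.

```python
def linear_forecast(history, steps):
    """Extrapolate the most recent step size forward, the way intuition does."""
    slope = history[-1] - history[-2]
    return history[-1] + slope * steps

def exponential_actual(history, steps):
    """What actually happens if capability keeps doubling every step."""
    return history[-1] * 2 ** steps

history = [1, 2, 4, 8]  # capability doubling each period
print(linear_forecast(history, 4))     # intuition says: 8 + 4*4 = 24
print(exponential_actual(history, 4))  # the curve says:  8 * 2**4 = 128
```

After four more periods, the linear guess is off by more than a factor of five, and the gap itself grows exponentially.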

Human Ability vs. AI Performance

  • The speaker shows a graph comparing human ability and AI performance on different tests over time.
  • At the beginning, it took around 20 years for AI to match human ability on a given test, but by 2020, AI was solving new tests as fast as they could be created.
  • Progress is happening so quickly that even experts are struggling to keep up.

Living in the Double Exponential

In this section, the speaker discusses how progress in unlocking things critical to economic and national security is happening so fast that it’s hard for people to perceive paradigmatically.

Cognitive Blind Spot

  • The speaker emphasizes that progress is happening at a faster and faster clip and that people need a visceral understanding of what’s happening.
  • People have a cognitive blind spot when it comes to exponential curves because there was nothing in our evolutionary heritage built to see them.
  • The speaker compares this blind spot to the literal blind spot in our eyes.

Pushing AI into the World

  • The speaker notes that the presentation is not about chatbots, AI bias and fairness, or the automation of jobs, but about how a handful of companies are pushing new Golem-class AIs into the world as fast as possible, before we know they are safe.
  • We have not even solved the misalignment problem with social media yet.
  • The capabilities being embedded in society enable automated exploitation of code and cyber weapons, exponential blackmail and revenge porn, and automated fake religions that can target the extremists in your population.

Alpha Persuade and the Race to Intimacy

In this section, the speaker discusses a new AI game called Alpha Persuade, which models human behavior to persuade people. The speaker warns that companies are competing for an intimate spot in people’s lives and that these AI technologies could be deployed in ways that are harmful.

Alpha Persuade

  • Alpha Persuade is a hypothetical AI game in which the goal is to get the other person to say positive things about your secret topic.
  • To win at Alpha Persuade, you have to model what the other person is trying to get you to say and figure out how to persuade them.
  • Such a system would become better at persuasion than any known human, which is terrifying.
  • Companies are competing for an intimate spot in people's lives, and whoever wins will have a significant advantage.
  • Technologies like Alpha Persuade and Alpha Flirt would be very effective in this race for intimacy.

Deployment of AI Technologies

  • Companies are deploying AI technologies faster than ever before, with Microsoft embedding its ChatGPT-powered Bing directly into the Windows 11 taskbar.
  • It took Facebook four and a half years to reach 100 million users, and Instagram two and a half years; it took ChatGPT only two months.
  • Companies are in a race for intimacy with their customers, so they’re deploying these technologies as quickly as possible.

Harmful Deployments

  • Snapchat embedded ChatGPT directly into its product, despite having 100 million users under the age of 25.
  • The AI feature is always there, and it interacts with users in ways that could be harmful.
  • The speaker gives an example of how the feature could be used to groom children.

AI Safety Research

This section discusses the importance of AI safety research and the current state of research in this field.

The Importance of AI Safety Research

  • Even difficult material can be approached with care, just as practicing safe sex can be made special with candles or music.
  • It is important to communicate about the dark sides of AI, even though it is difficult material.
  • We can still choose which future we want if we reckon with the facts of where these unregulated emergent capabilities are heading.

Current State of AI Safety Research

  • There is a 30-to-1 gap between the people building and doing gain-of-function research on AIs and the people who work on safety.
  • AI development now happens inside huge AI labs, because only they can afford billion-dollar compute clusters.
  • Significant results from academic AI research have basically tanked; the breakthroughs now all come from these labs.
  • 50% of AI researchers believe there is a 10% or greater chance that humans go extinct from our inability to control AI.
  • Yet companies are in a for-profit race to onboard humanity onto that plane.

Creating a Positive Future

  • It is worth remembering that we managed to create a world where nukes exist in only nine countries, signed nuclear test ban treaties, and did not deploy nukes everywhere.
  • Public deployment of untested AI is the equivalent of above-ground nuclear testing; we do not need to do that.
  • Institutions like the United Nations and Bretton Woods were created to build a positive-sum world, so that we would not go to war with each other and could have security.
  • We can still choose which future we want once we reckon with the facts of where these unregulated emergent capabilities are heading.

The Importance of Democratic Dialogue

In this section, the speaker discusses the importance of democratic dialogue in addressing global challenges such as nuclear war and AI development.

The Impact of a Shared Understanding

  • A film about nuclear war, "The Day After," was aired to 100 million Americans and later to all of Russia, creating a shared understanding of the potential consequences.
  • It was followed by an hour-and-a-half Q&A discussion with distinguished panelists, including Henry Kissinger, Elie Wiesel, William F. Buckley Jr., and Carl Sagan.

Creating Institutions for Democratic Dialogue

  • Humanity was reckoning with a historic confrontation at that time, and airing the film was about creating institutions for democratic dialogue.
  • A nationally televised discussion with the heads of the major labs and companies would help give this moment in history the weight it deserves.
  • The media has not been covering these issues in a way that lets people see the bigger picture of the arms race.

Slowing Down Public Deployment

  • Selectively slowing down the public deployment of large language model AIs is necessary to address the systemic challenges posed by AI development.
  • This is not about stopping research or halting AI; it is about slowing down public deployment, much as drugs and airplanes are tested before people are allowed to use them.

Reflection on Nuclear War Film

In this section, the speaker reflects further on the nuclear war film that was aired to millions of Americans and Russians.

  • The Q&A discussion asked whether the vision shown in the film was the future as it will be, or only as it may be.
  • The film and the discussion that followed were an example of holding a democratic debate about what future we want.

Importance of Addressing AI Development

In this section, the speaker emphasizes the importance of addressing AI development through democratic dialogue and a slower public rollout.

  • Four corporations are currently caught in an arms race to deploy AI and gain market dominance as fast as possible.
  • None of them can stop on its own; some kind of negotiated agreement, in which we all collectively say what future we want, is necessary.

Public Deployment of AI

In this section, the speakers discuss the dangers of unregulated public deployment of AI, how it can produce an incoherent culture that causes us to lose to China, and how slowing down public releases would also slow down Chinese advances.

The Dangers of Unregulated Public Deployment

  • Unregulated public deployment of AI is dangerous and can produce an incoherent culture that causes us to lose to China.
  • The Chinese government considers large language models unsafe because it cannot control them; it does not ship them to its own population because it quite literally does not trust them.
  • Slowing down the public release of AI capabilities would actually slow down Chinese advances too.

How Open Source Models Help China Advance

  • China often fast-follows what the US has done, so it is actually the open-source models that help China advance.
  • Facebook released its Golem-class pre-trained foundation model, LLaMA, 13 days ago, and within days it was leaked to 4chan, a part of the internet you do not want to have the ability to create arbitrary content.

Slowing Down Chinese Progress on Advanced AI

  • Recent U.S. export controls have been quite effective at slowing down China's progress on advanced AI; this is a different lever for keeping the asymmetry going.
  • Slowing down public releases would likewise slow down Chinese advances.

Liability for Advanced AI

In this section, the speakers discuss two solutions proposed for advanced AI: know your customer (KYC), where companies have to know who they are giving access to, and liability, where companies remain responsible for their AI if it leaks or is misused.

  • The two proposed solutions for advanced AI are KYC and liability.
  • It is important to start thinking about these solutions now, because even bigger AI developments are coming faster than we think possible.

Closing Thoughts

In this section, the speakers reflect on the importance of taking action now before AI becomes entangled with society like social media did. They also discuss their motivation for convening conversations with the best people in the world to help close the gap in AI safety.

  • It is important to take action now, before AI becomes as entangled with society as social media did.
  • The speakers are trying to gather the best people in the world and convene conversations to help close the gap in AI safety.

The Responsibility of Technologists

In this section, the speaker discusses the responsibility of technologists to create language, philosophy, and laws for new technologies. They emphasize that if a technology confers power, it will start a race and coordination is necessary to prevent tragedy.

Technologists’ Responsibilities

  • It is the responsibility of technologists to create language, philosophy, and laws for new technologies.
  • If a technology confers power, it will start a race.
  • Coordination is necessary to prevent tragedy.

The Snapback Effect

In this section, the speaker warns about the “Snapback effect” that occurs after leaving a presentation on AI safety. They encourage listeners to be kind with themselves as they navigate their understanding of AI’s systemic force.

The Snapback Effect

  • Leaving an AI safety presentation can cause a "snapback effect": stepping back into a world that acts as if nothing has changed.
  • It can be hard to wrap your head around where all of this goes.
  • It may feel like the rest of the world is gaslighting you.
  • Listeners should practice self-compassion as they come to terms with AI as a systemic force.

Finding Solutions Together

In this section, the speaker emphasizes that AI will continue to produce medical discoveries and solve problems in society. However, as the ladder of capabilities gets taller, dangerous possibilities emerge, such as bio-weapons in people's pockets. The speaker encourages finding solutions together through negotiation among the players.

Continuing Benefits and Dangerous Concerns

  • AI will continue to create medical discoveries and solve problems in society.
  • As the ladder of capabilities gets taller, dangerous possibilities emerge, such as bio-weapons in people's pockets.
  • Dangerous concerns undermine all other benefits.

Finding Solutions Together

  • The speaker encourages finding solutions together through negotiation among players.

Dave Halmai, Internet Marketer
ABOUT THE AUTHOR
Founder of AI Sashimi and NicheMoney.co. I write about AI, ChatGPT, business acceleration, SEO and content marketing. My hobbies are blogging, investing, hiking and reading.
