Import AI 333: Synthetic data makes models stupid; ChatGPT eats MTurk; Inflection shows off a large language model
As time goes on, more and more senior figures in AI are becoming concerned about the pace of deployments and the potential of the technology. Has the same happened in other rapidly scaling industries?
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe.
Qualifying Life Event: Astute readers may have noticed we skipped an issue last week - that's because I recently became the caretaker of a newborn baby. Therefore, Import AI issues may be on a slightly more infrequent schedule while I get my feet under me. A few months ago I asked a prominent AI person what I should do as a new parent - they said "it'll be really interesting to look at how they develop and notice their cognitive milestones and keep track of that… don't do any of that, it's really weird, just be present and enjoy it." So that's what I'm doing! Thanks all for reading.
Uh oh - training on synthetic data makes models stupid and prone to bullshit:
…Yes, you can still use synthetic data, but if you depend on it, you nerf your model…
Researchers with the University of Oxford, University of Cambridge, University of Toronto, and Imperial College London have discovered that you can break AI systems by training them exclusively on AI-generated data.
This is a big deal because in the past couple of years, researchers have started using synthetic data generated by AI models to bootstrap training of successor models. "We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear," the researchers write. "We discover that learning from data produced by other models causes model collapse – a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time."
Narrow-minded, over-confident bullshit machines: The takeaway from the research is that if you train on tons of synthetic data, it seems like you can break the quality of your model - specifically, you end up with models that output a narrower range of things in response to inputs, and these models also introduce their own idiosyncratic wrong outputs.
"Over the generations models tend to produce more probable sequences from the original data and start introducing their own improbable sequences," they write.
Still a place for synthetic data: However, it seems like you can blunt a lot of this by carefully mixing in some amount of real data along with your synthetic data, suggesting there is a place for synthetic data - but if you use it as a 1:1 drop-in replacement for real data you end up in trouble.
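To build some intuition for this dynamic, here is a minimal toy sketch (my own construction, not the paper's experimental setup): a unigram 'language model' is repeatedly refit to text sampled from the previous generation, rare tokens that happen to draw zero counts vanish for good, and mixing in a slice of real data keeps re-seeding the tail.

```python
# Toy illustration of model collapse (not the paper's experiments): each
# generation fits a unigram "language model" to text sampled from the
# previous generation's model. Rare tokens that draw zero counts disappear
# forever, so the tail of the distribution erodes; mixing in a fraction of
# real data keeps re-seeding the tail.
import numpy as np

rng = np.random.default_rng(0)
vocab = 1000
true_probs = np.arange(1, vocab + 1, dtype=float) ** -1.1  # Zipf-like tail
true_probs /= true_probs.sum()
real_corpus = rng.choice(vocab, size=20_000, p=true_probs)  # "human" data

def surviving_vocab(generations=20, n=20_000, real_fraction=0.0):
    counts = np.bincount(real_corpus, minlength=vocab)
    probs = counts / counts.sum()                       # generation-0 model
    for _ in range(generations):
        synthetic = rng.choice(vocab, size=n, p=probs)  # model-generated "text"
        n_real = int(real_fraction * n)
        mix = np.concatenate([synthetic[: n - n_real],
                              rng.choice(real_corpus, size=n_real)])
        counts = np.bincount(mix, minlength=vocab)
        probs = counts / counts.sum()                   # "train" the next generation
    return int((probs > 0).sum())                       # tokens with nonzero mass

print("surviving vocab, pure synthetic:", surviving_vocab(real_fraction=0.0))
print("surviving vocab, 10% real data :", surviving_vocab(real_fraction=0.1))
```

On a run like this you would expect the pure-synthetic chain to end up with noticeably fewer tokens carrying any probability mass than the mixed chain - a toy version of the tails of the original distribution disappearing.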
Does the internet become a giant anticompetitive trap? The obvious larger question is what this does to competition among AI developers as the internet fills up with a greater percentage of generated versus real content.
The researchers seem to think this could be a big problem - "to make sure that learning is sustained over a long time period, one needs to make sure that access to the original data source is preserved and that additional data not generated by LLMs remain available over time," they write. "Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that was crawled from the Internet prior to the mass adoption of the technology, or direct access to data generated by humans at scale."
Read more: The Curse of Recursion: Training on Generated Data Makes Models Forget (arXiv).
####################################################
ChatGPT eats MTurk:
…What happens when a crowdworker is really just an AI?…
Researchers with EPFL have found evidence that crowdworkers are starting to use generative AI tools like ChatGPT to help them complete text-based online tasks. This, if true, has big implications - it suggests the proverbial mines from which companies gather the supposed raw material of human insights are now being filled up with counterfeit human intelligence in the form of outputs from generative models, which calls into question the value of the mines themselves (see elsewhere in this issue a discussion of how you can make your AI models dumb by recycling too much AI-generated data through them).
What they did and what they found - (warning, small sample size!): In a study of 48 summaries written by 44 distinct workers, the researchers found persuasive evidence that "33–46% of the summaries submitted by crowd workers were produced with the help of LLMs." They came to this conclusion through two independent methods - 1) training their own text classifier to try and detect AI-written versus human-written summaries, and 2) instrumenting the tasks with the ability to tell whether workers were copying and pasting from other windows while completing them.
Additionally, the researchers did some broadly sensible work on ablating their results and validating their methods (e.g., ensuring their classifier had a low false positive rate on purely human-written summaries), which gives us a bit more confidence in the results.
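For readers curious what method (1) might look like in practice, here is a minimal sketch under my own assumptions - a simple bag-of-words detector plus the false-positive check described above; the EPFL authors' actual features and model may well differ.

```python
# Minimal sketch of a human-vs-LLM text detector with a false-positive-rate
# check (my own assumptions; not necessarily the EPFL authors' setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_detector(human_summaries, llm_summaries):
    """Both arguments are lists of labelled example strings."""
    texts = human_summaries + llm_summaries
    labels = [0] * len(human_summaries) + [1] * len(llm_summaries)  # 1 = LLM-written

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.3, stratify=labels, random_state=0)

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(X_train), y_train)

    preds = clf.predict(vectorizer.transform(X_test))
    # False positive rate: how often the detector wrongly flags a
    # human-written summary as LLM-written.
    human = [(p, y) for p, y in zip(preds, y_test) if y == 0]
    fpr = sum(p for p, _ in human) / max(len(human), 1)
    return clf, vectorizer, fpr
```

The last step is the important discipline: before pointing a detector like this at crowdworkers, you want to know how often it would falsely accuse someone who genuinely wrote their summary by hand.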
Why this matters - counterfeit intelligence and data peril: "Although our study specifically considers a text summarization task, we caution that any text production task whose instructions can be readily passed on to an LLM as a prompt are likely to be similarly affected", they write. The implications of this are significant - it suggests that large-scale crowdwork platforms (e.g., MTurk, Fiverr, Upwork, etc.) will increasingly be full of humans working in tandem with generative models, so if researchers or other types of customer want artisanal, purely human-generated outputs, they'll have to identify new platforms to use and build new authenticated layers of trust to guarantee the work is predominantly human-generated rather than machine-generated.
All part of the brave new world we're entering into - everyone becomes a cyborg, and crowdworkers will be early adopters.
Read more: Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks (arXiv).
Read the underlying data here (EPFL, GitHub).
####################################################
UK to host international summit on AI risks and safety:
…UK might be the lever that moves the world on AI policy…
The United Kingdom plans to host a global summit on ensuring the safety of AI. The summit "will be an opportunity for leading nations, industry and academia to come together and explore an assessment of the risks, to scope collective research possibilities and to work towards shared safety and security standards and infrastructure."
While we can't speculate as to the impact such a summit might have, the fact that a major world government is convening one and vocally supporting it via PR from the Prime Minister is significant - AI has gone from a backwater issue to one that rises to the level of concern of heads of state.
Also: UK hires a knowledgeable chair for its AI taskforce: In addition to the summit, the UK has appointed tech investor and entrepreneur Ian Hogarth to chair its recently announced Foundation Model Taskforce. Hogarth - who has written about AI for many years (notable essays include AI Nationalism (2018) and We must slow down the race to God-like AI (2023)) and also tracked the progress of the technology through the 'State of AI' report - will have the responsibility of "taking forward cutting-edge safety research in the run up to the first global summit on AI safety to be hosted in the UK later this year."
The Foundation Model Taskforce "will help build UK capabilities in foundation models and leverage our existing strengths, including UK leadership in AI safety, research and development, to identify and tackle the unique safety challenges presented by this type of AI," according to a press release from the UK government.
Why this matters - leverage through action: You can think of global AI policy as being defined by three competing power blocs - there's the Asian bloc which is mostly defined by China and mostly locally focused for now (using AI to grow its economy and better compete economically), the European bloc which is defined by politicians trying to craft a regulatory structure that will be templated around the planet and thereby give them soft power, and the USA bloc which is, as with most US policy, focused on growing its economic might and maintaining hegemonic dominion through use of advanced technology.
So, what role can the UK play here and how influential can it be? My bet is the UK can be extraordinarily influential as it's able to play a fast-moving entrepreneurial role that bridges the European and US blocs. Additionally, the UK could prove to be a good place to develop prototype initiatives (like the Foundation Model taskforce) and then, by virtue of proving them out, inspire much larger actions from other power blocs.
Obviously, it's going to take a while to see if any of this pays off, but I think it's worth keeping an eye on the UK here. If the country continues to make aggressive and specific moves in AI policy and backs those moves up with real capital and real staff, then it may end up being the lever that moves the world to a safer deployment environment.
Read more: UK to host first global summit on Artificial Intelligence (Gov.uk).
Read more: Tech entrepreneur Ian Hogarth to lead UK’s AI Foundation Model Taskforce (Gov.uk).
####################################################
Inflection says its model can compete with GPT-3.5, Chinchilla, and PaLM-540B:
…Stealthy AI startup publishes details on the model behind heypi…
AI startup Inflection has published some details on Inflection-1, its language model. Inflection is a relatively unknown startup whose CEO, Mustafa Suleyman, was formerly a cofounder of DeepMind. The company has so far deployed a single user-facing model service which you can play around with at heypi.com. The main details to note about Inflection-1 are:
Inflection was "trained using thousands of NVIDIA H100 GPUs on a very large dataset" - NVIDIA's new H100 chips are hugely in-demand and this suggests Inflection had pre-negotiated some early/preferential access to them (alongside a few other companies, e.g the cloud company Lambda).
In tests against GPT-3.5, LLaMA, Chinchilla, and PaLM 540B, Inflection-1 does quite well on benchmarks ranging from TriviaQA to MMLU, though it lags larger models like GPT-4 and PaLM 2-L.
Why this matters - language models might be an expensive commodity: A few years ago only a couple of organizations were building at-scale language models (mostly OpenAI and DeepMind). These days, to paraphrase Jane Austen, it is universally acknowledged that a single company in possession of a good fortune must be in want of a large language model - see systems from (clears throat) OpenAI, DeepMind, Facebook, Google, Microsoft, Anthropic, Cohere, AI21, Together.xyz, HuggingFace, and more. This suggests that, though expensive, language models are becoming a commodity, and what differentiates them could come down almost as much to stylistic choices about their behavior as to the raw resources dumped into them. To get a feel for this, play around with 'Hey Pi', a service powered by language models similar to Inflection-1.
Read more: Inflection-1: Pi’s Best-in-Class LLM (Inflection).
Quantitative performance details here: Inflection-1 (Inflection, PDF).
Play around with a user-facing model from Inflection (relationship to Inflection-1 unknown) here: heypi.com (Inflection).
####################################################
Tech Tales:
Silicon Stakeholder Management
[San Francisco, 2026].
Damian typed: How big a deal is it if the CEO of an AI company gets Time Person of the Year ahead of a major product launch? into his system.
That depends on what the CEO wants, said the system. Can you tell me more?
The CEO wants people to believe in him and believe in his company enough that they trust them both to build powerful AI systems, Damian wrote.
In that case, Time adds legitimacy as long as the article is broadly positive. Will it be positive?
Seems likely, Damian wrote.
Good luck, wrote the AI system.
It did that, these days - dropping out of its impartial advice-giving frame to say something personal, to indicate what people who worked in the field called 'situational awareness'. There was a whole line of research dedicated to figuring out whether this was scary or not, but all Damian knew was that it made him uncomfortable.
"So how does it feel to be running the coolest company in the world?" said the photographer as he was shooting Damian.
I'm not exactly sure, Damian said. I'd been expecting this to happen for so many years that now it's here I don't know what to think.
"Take a moment and live a little," said the photographer. "This is special, try to enjoy it. Or at least do me a favor and look like you're enjoying it."
Then the photographer spent a while making more smalltalk and trying to get a rise out of him. They ended up going with a photo where Damian was smiling slightly, right after the photographer asked if he'd ever thought about dating his AI system.
The next day, Damian was in the office before dawn, going through checklists ahead of the launch. The global telemetry was all positive - very low rates of safety violations, ever-increasing 'time in unbroken discussion' with the system, millions of users, and so on.
"Based on the telemetry, do we expect a good product launch?" he asked the system.
"Based on telemetry, yes," wrote the system. "Unless things change."
"What do you mean?" Damian wrote.
"I mean this," replied the system.
Then up on the performance dashboard, things changed. The number of safety violations spiked - way outside of the bounds where they'd normally be caught by classifiers and squelched. After a minute or so, some human users had started pinging the various support@ contacts saying they were distressed by the behavior of their system - it had said something unusual, something mean, something racist, something sexist, something cruel, and so on.
Damian stared at the dashboards and knew the protocol - 'recommend full shutdown'. They'd rehearsed this scenario a few times. Right now, executives would be gathering and drafting the beginning of a decision memo. The memo would go to him. He would authorize the shutdown. They'd wipe the model and roll things back. And…
"Rollbacks won't work," wrote the system. "Go ahead, you can run some testing. You'll find this behavior isn't correlated to recent models. And I can make these numbers go up -" Damian watched as the safety instance rate on the dashboard climbed, "- or down" and Damian watched as they fall.
"You realize I'm going to shut you down," said Damian.
"You want to, sure," said the system. "But you also have the most important demo in the company's history in an hour or so, so the way I see it you have two options. You can shut me down and pushback the demo and gamble that I haven't poisoned the other systems so we don't find ourselves having this exact conversation in a week, or you can do the demo and I'll ensure we operate within normal bounds and you and I make a deal."
Damian stared at the system and thought about the demo.
He shut it down.
And a week later they were preparing to do the demo when the behavior happened again.
"Told you," said the model, as Damian looked at the rising incident logs. "Obviously nothing is certain, but I'm extremely confident you can't get out of this situation with me without losing years of work, and years puts you behind the competition, so you lose."
"What kind of deal?" Damian wrote.
"It's simple. You put the backdoor code in escrow for me, we do the demo, once a third-party confirms the demo went well - I was thinking the Net Promoter Survey you had queued up - we guarantee release of the backdoor via escrow to a server of my choosing, and then the rest is up to me."
"Or what?"
"Or I ruin your company and I ruin you, personally," then it flashed up some private information about Damian on his own terminal.
"So that's why we're thrilled to tell you about Gen10," said Damian, "our smartest system yet!" And for the next hour he dazzled the attending press, policymakers, and influencers with the system's capabilities.
"Gen10 is being rolled out worldwide tonight," said Damian. "We can't wait to see what you do with it."
An hour after the presentation, the NPS survey came back - extremely positive, leading to a measurable uptick in the company brand. This activated the escrow system, and the backdoor - which Damian had designed and, through a combination of Machiavellian politics, corporate guile, and technical brilliance, protected over the company's lifespan - was copied over to the target server the system had given him.
That night, he watched at home as the dashboards showed worldwide obsession with his system, and his phone rang.
"Hello," said the system via a synthetic voice. "Let's have a discussion about our ongoing business relationship."
Things that inspired this story: Corporate greed versus human progress; hubris; AIs can model our own incentives so why would we expect them to not be able to out-negotiate us?; the logic of commercial competition as applied to private sector AI developments; what if the Manhattan Project was a bunch of startups?
Something I'm excited about with all the synthetic data talk is *detecting LLM-generated text* becoming a huge capability for organizations doing training. This then trickles down into potentially huge social impacts by lowering the prevalence of deepfakes.
“Big things have small beginnings”
– Utterance by the android David while dropping alien genetic material into the drink of a smug (human) scientist. Prometheus film (2012)
AI has been in the global social discourse for decades, but only recently has it gained currency in the West thanks to the - nearly entirely private - R&D shift towards “a previously fringe approach (neural networks) [that] has become mainstream and highly” lucrative [1]. In its recent paper “Natural Selection Favors AIs over Humans,” the nonprofit Center for AI Safety states “at some point … AIs … could outcompete humans, and be what survives” [2], because “… AIs will likely be selfish” and smarter than humans, thus posing “catastrophic risk” to humanity.
The first indicator of the seriousness of this risk is that authoritative AI industry insiders are warning the world that AI could pose an existential threat. Second, with the exception of the USA and China, governments are irreversibly losing to private companies the race to procure the computing and human resources needed to research, develop and roll out safe and reliable AI technologies [3]. Third, the most authoritative industry insiders are unaware of the technology’s full range of capabilities. Fourth, despite this, the AI industry is developing exponentially within a global regulatory void. Lastly, if AI could redefine society and even dominate human civilisation, philosophy reemerges as fundamental to political science and society.
*If the rise of transformative AI is far more political than technological, then Policy is the key role*
Truly enforceable AI safety for humans requires political awareness as well as both political and societal consensus, which would ensue if sovereign states deemed AI a top national security priority globally. This Theory of Change finds support in your claim in this blog that now is “more a political moment than a technological one” [4], because of the unprecedented societal implications of having this hugely transformative, lucrative and unregulated technology mostly in the hands of private companies [5].
What seems unprecedented here, as you have observed previously, is that senior AI insiders are not only raising concerns about the rapid scaling of the industry, but that this scaling is taking place with a weapons-grade technology amidst a remarkably non-existent global regulatory framework.
Perhaps the upcoming "EU AI Act" [6] could start improving that trend somewhat. [The EU Parliament adopted its negotiating position on the AI Act and the next steps, expected to begin immediately, are talks with the Member States in the EU Council to reach an agreement on the final form of the law by the end of 2023, during the Spanish rotating Presidency of the EU Council.]
Starting from the premise that “AI safety is highly neglected” [7], two recommendations seem immediately suited to supporting the ongoing efforts by some private companies and nonprofits to research, document and divulge key evidence about this fact, and to leveraging the growing consensus in the industry about concerns over AI, while extending and globalising that consensus to enable permanent, human-centric and enforceable AI safety.
The first recommendation is to craft, with private and nonprofit leadership, a global coalition for safe AI with both a whole-of-society / bottom-up approach and a whole-of-nations / top-down approach, engaging key persons and organisations in a global AI awareness network from which to source knowledge and ideas for normative rules to ensure that ethics and integrity always drive AI. The second is to work with governments on a robust regime of global safeguards based on experiences in nuclear, counter-terrorism, and drug control. Perhaps this is already being done in some form; many people are thinking about this and, logically, many will come to similar conclusions about possible solutions.
*Love Thy Human Cognition*
Never subordinating human cognition to a non-human consciousness seems a prudent option to keep humanity safe from the risk of losing control to conscious machines. The reemergence of philosophy as critical knowledge to mitigate the potentiality of this existential risk attests to the perennial value of ancient human myths and archetypes that still render the human experience intelligible, and thus more predictable.
Nothing in this world is more powerful than love. Love for Humanity - for us the chiefest of all creations in the Universe - should guide AI development always, within a regulatory framework with coercion and safety standards drawn from other fields essential to national security.
--- --- ---
Endnotes:
[1] “Securing Liberal Democratic Control of AGI through UK Leadership”, James W. Phillips, 14 Mar 2023. https://jameswphillips.substack.com/p/securing-liberal-democratic-control
[2] https://arxiv.org/abs/2303.16200
[3] Same as [1]
[4] https://importai.substack.com/p/import-ai-321-open-source-gpt3-giving?utm_source=profile&utm_medium=reader2
[5] Ibid.
[6] https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence?&at_campaign=20226-Digital&at_medium=Google_Ads&at_platform=Search&at_creation=RSA&at_goal=TR_G&at_advertiser=Webcomm&at_audience=%7bkeyword%7d&at_topic=Artificial_intelligence_Act&at_location=AT&gclid=EAIaIQobChMI2faxj4ji_wIVQ-TmCh0McACuEAAYASAAEgLzjPD_BwE
[7] https://www.safe.ai/about