As time goes on, more and more senior figures in AI are becoming concerned about the pace of deployments and the potential of the technology. Has the same happened in other rapidly scaling industries?
Something I'm excited about, amid all the synthetic-data talk, is *detecting LLM-generated text* becoming a major capability for organizations training models. This could then trickle down into potentially huge social impacts by lowering the prevalence of deepfakes.
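As a minimal sketch of what that detection capability might look like in practice (a common baseline, not a method proposed in this post): score a passage's perplexity under a reference language model and flag text that looks suspiciously "model-like". The reference model (`gpt2`) and the cutoff value below are illustrative assumptions; a real detector would combine several signals.

```python
# Toy perplexity-based flagger for LLM-generated text.
# Assumes the Hugging Face `transformers` and `torch` packages are installed;
# the reference model ("gpt2") and the threshold are illustrative, not tuned.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the reference model (lower = more model-like)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)  # loss = mean token NLL
    return torch.exp(out.loss).item()

def looks_llm_generated(text: str, threshold: float = 30.0) -> bool:
    # Hypothetical cutoff: production detectors combine classifiers,
    # burstiness statistics and watermark checks, not a single threshold.
    return perplexity(text) < threshold

print(looks_llm_generated("The quick brown fox jumps over the lazy dog."))
```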
“Big things have small beginnings”
– Utterance by the android David while dropping alien genetic material into the drink of a smug (human) scientist, in the film Prometheus (2012).
AI has been part of the global social discourse for decades, but only recently has it gained currency in the West, thanks to a (nearly entirely private) R&D shift towards “a previously fringe approach (neural networks) [that] has become mainstream and highly” lucrative [1]. In its recent paper “Natural Selection Favors AIs over Humans,” the nonprofit Center for AI Safety states that “at some point … AIs … could outcompete humans, and be what survives” [2], because “… AIs will likely be selfish” and smarter than humans, thus posing a “catastrophic risk” to humanity.
The first indicator of the seriousness of this risk is that authoritative AI industry insiders are warning the world that AI could pose an existential threat. Second, with the exception of the USA and China, governments are irreversibly losing to private companies the race to procure the computing and human resources needed to research, develop and roll out safe and reliable AI technologies [3]. Third, even the most authoritative industry insiders do not know the technology’s full range of capabilities. Fourth, despite this, the AI industry is growing exponentially in a global regulatory void. Lastly, if AI could redefine society and even dominate human civilisation, philosophy reemerges as fundamental to political science and society.
*If the rise of transformative AI is far more political than technological, then Policy is the key role*
Truly enforceable AI safety for humans requires political awareness as well as both political and societal consensus, which would ensue if sovereign states deemed AI a top national security priority globally. This Theory of Change finds support in your claim in this blog that now is “more a political moment than a technological one” [4], given the unprecedented societal implications of having this hugely transformative, lucrative and unregulated technology mostly in the hands of private companies [5].
What seems unprecedented here, as you have observed previously, is that senior AI insiders are voicing concern not only about the rapid scaling of the industry, but about the fact that this scaling involves a weapons-grade technology in the near-total absence of a global regulatory framework.
Perhaps the upcoming EU AI Act [6] could start to improve that trend somewhat. [The EU Parliament has adopted its negotiating position on the AI Act; the next step, expected to begin immediately, is talks with the Member States in the EU Council to agree the final form of the law by the end of 2023, during Spain’s rotating Presidency of the Council.]
Starting from the premise that “AI safety is highly neglected” [7], two recommendations seem immediately fit to support the ongoing efforts of some private companies and nonprofits to research, document and publicise key evidence of this neglect, and to leverage the industry’s growing consensus of concern over AI, extending and globalising that consensus to enable permanent, human-centric and enforceable AI safety.
The first recommendation is to craft, under private and nonprofit leadership, a global coalition for safe AI with both a whole-of-society (bottom-up) and a whole-of-nations (top-down) approach, engaging key people and organisations in a global AI awareness network from which to source knowledge and ideas for normative rules ensuring that ethics and integrity always drive AI. The second is to work with governments towards a robust regime of global safeguards modelled on experience in nuclear safeguards, counter-terrorism and drug control. Perhaps this is already being done in some form; many people are thinking about this and, logically, many will reach similar conclusions about possible solutions.
*Love Thy Human Cognition*
Never subordinating human cognition to a non-human consciousness seems a prudent way to keep humanity safe from the risk of losing control to conscious machines. The reemergence of philosophy as knowledge critical to mitigating this potential existential risk attests to the perennial value of ancient human myths and archetypes, which still render the human experience intelligible and, thus, more predictable.
Nothing in this world is more powerful than love. Love for Humanity, for us the chiefest of all creations in the Universe, should always guide AI development, within a regulatory framework whose coercive powers and safety standards are drawn from other fields essential to national security.
--- --- ---
Endnotes:
[1] “Securing Liberal Democratic Control of AGI through UK Leadership”, James W. Phillips, 14 Mar 2023. https://jameswphillips.substack.com/p/securing-liberal-democratic-control
[2] “Natural Selection Favors AIs over Humans”, Dan Hendrycks, 2023. https://arxiv.org/abs/2303.16200
[3] Same as [1]
[4] Import AI newsletter: https://importai.substack.com/p/import-ai-321-open-source-gpt3-giving
[5] Ibid.
[6] “EU AI Act: first regulation on artificial intelligence”, European Parliament, 2023. https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
[7] Center for AI Safety, “About”. https://www.safe.ai/about