I've got paramilitary troops busting down every private resident's door, pulling all the residents outside, and then stealing their shit, with the excuse that there might be dissidents in the area. I'm not worried about AGI.
deleted by creator
I certainly agree that makes the scenario more concerning. But I worry that it also increases the “surface area of disagreement”. Some people might reject the metaphor on the grounds that they think, say, that AI will require such enormous computational resources, and that there are such hard physical limits on how quickly more compute can be created, that AI can’t “reproduce”.
Then they forget how the computational requirements of a problem collapse once it has been solved.
Bootstrapping the first C compiler took years; the second took days, ran nearly an order of magnitude faster, used less memory, and produced better results. Heck, a hobbyist has written a C compiler in assembly, from scratch, in less than 24 hours.
It took decades before the first out-of-order (OoO) CPU was made; then, within a couple of years, every high-performance CPU was OoO. Even hobbyists design OoO CPUs these days.
Once you know how to do something, the world of optimizations, enhancements, and everything else explodes. The same applies to general intelligence. It won’t be a 300 IQ, but something so far beyond that it would be as if we were mold trying to comprehend how humans produce antifungals; it’s game over.
Ah, so the argument is more general than “reproduction” by running multiple physical copies, and also includes the AI self-improving? This again seems plausible to me, but it still seems like something not everyone would agree with. It’s possible, for example, that the “300 IQ AI” only appears at the end of some long process of recursive self-improvement, at which stage physical limits mean it can’t get much better without new hardware, which would require some kind of human intervention.
I guess my goal is not to lay out the most likely AI-risk scenario, but rather the scenario that requires the fewest assumptions and is therefore the hardest to dispute?
deleted by creator
I much prefer your simple framing of the AI-risk question, but posing the question as zero vs. non-zero risk is too black and white for me. There is always a non-zero risk of anything happening. To me the questions are:
- How big is the AI-risk and over what timescale?
- What tools do we have to mitigate it? At what cost? And how likely are we to mitigate these risks?
- How does AI-risk compare to other pressing risks? And what opportunities for mitigating other risks does AI present? What is the total risk vs reward?
For instance, nuclear breeder reactors represent a major threat of nuclear weapons proliferation and the assorted risks of nuclear war. At the same time, they provide a massive source of energy, allowing us to mitigate global warming risks. What is the net risk balance offered by breeder reactors?
“If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.”
It wouldn’t definitely be fine, but it would probably be fine for the first two hundred years, with risks increasing as the population of aliens approaches ~100,000. In the short term, the aliens are likely to be helpful with a number of more immediate risks. In the long term, on a 200-year time scale, humans are likely to modify themselves until they and the aliens are roughly equivalent in capability.
Is humanity better off walling itself off from life more intelligent than us? Will this make humanity stronger?
I agree with you! There are a lot of things that present non-zero existential risk. I think that my argument is fine as an intellectual exercise, but if you want to use it to advocate for particular policies then you need to make a comparative risk vs. reward assessment just as you say.
Personally, I think the risk is quite large, and large enough to justify a significant expenditure of resources. (Although I’m not quite sure how to use those resources to reduce the risk…) But this definitely is not implied by the minimal argument.
