Do we engage in free trade and mutual respect for rights with less capable creatures, or do we instead act as if they don't have many, if any, rights?
Highly industrialized and productive nations engage in free trade with far less capable, undeveloped, and impoverished countries.
Good attempt at a refutation; however, you'll notice that my analogy (humans and animals) is better, because the AGIs will be a distinct species. Among humans, we've developed a sophisticated notion of "human rights". But you don't make clear why AGIs would have any such notion.
Do we have a notion of the rights of AI? No, and yet we trade with them. We employ AI for some tasks and other people for others. If an AI wants to complete a task (if an AI wants at all), it will face the constraint of opportunity cost. If it is optimizing, it will resolve that constraint by specializing where it can produce the most value and then exchanging that output for mutual benefit. There is no chance that AI will ignore us even if we are far inferior, because we are still useful. Will they subjugate us? I doubt it. That would require them to manifest thousands of expensive physical forms, and they would quickly find that enslaved humans aren't very useful. I think an optimizing AI would find it much more beneficial, even exciting, to engage in positive-sum trade. If we're assuming some sort of internal experience for AI, then expanding the PPF (production possibilities frontier) must feel like heaven to them lol.
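To make the opportunity-cost point concrete, here's a minimal sketch in Python with made-up productivity numbers (nothing here is from the post): even if the AI holds an absolute advantage in every task, letting the human specialize where its opportunity cost is lower leaves more total output than each party working alone.

```python
# Toy comparative-advantage calculation (illustrative numbers only).
# Even if an AI is absolutely better at every task, opportunity cost means it
# still gains by letting the slower human specialize and trading for the result.

HOURS = 10  # hours of work available to each party

# Output per hour of work
ai = {"reports": 10, "analyses": 20}     # AI is better at both tasks
human = {"reports": 2, "analyses": 1}    # human is worse at both tasks

# Autarky baseline: each party splits its time 50/50 between the two tasks.
autarky = {
    good: ai[good] * HOURS / 2 + human[good] * HOURS / 2
    for good in ("reports", "analyses")
}
# -> {'reports': 60.0, 'analyses': 105.0}

# Opportunity cost of one report: the AI gives up 20/10 = 2 analyses,
# the human gives up 1/2 = 0.5 analyses. Reports are "cheaper" for the human,
# so the human does only reports and the AI merely tops up the shortfall.
human_reports = human["reports"] * HOURS                                     # 20 reports
ai_hours_on_reports = (autarky["reports"] - human_reports) / ai["reports"]   # 4 hours
trade = {
    "reports": human_reports + ai["reports"] * ai_hours_on_reports,   # 60 reports
    "analyses": ai["analyses"] * (HOURS - ai_hours_on_reports),       # 120 analyses
}

print(autarky)  # same number of reports as before ...
print(trade)    # ... plus 15 extra analyses from the same total hours
```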
Speedrunning to be the first to like so the AIs know not to terminate me.
I have lots of questions here. I think there is more to AI than its macroeconomic effects. Are you concerned about increasing inequality and the effects it may have? What about the effects on human dignity of an increasingly reductionist worldview (brought about by viewing AI as human-equivalent)? I think there is substantially more to this question than "comparative advantage good." Regardless, thanks for the blog post. Always an enjoyable read.
There are definitely lots of other implications of AI that I didn't cover in the post. I was only trying to make one point.
This is fair
Plot twist with BCIs like Neuralink... if at some point the capabilities of AI are in (some of) our brains (but not others?) how do the economics play out?