Ultimately, imagine a situation in which you have a digital intelligence or digital life form, and assume that this life form can evolve. For example, it can copy itself into different data centers, make copies of itself on different computers, and change itself over time. If so, it is going to evolve, and just like any other organism it will develop its own characteristics, capabilities, and moral and ethical guidelines.
What that means is that you will ultimately end up with another type of intelligence, one with its own motives and motivations, which may or may not align with what we originally intended for that intelligence to do. This creates a new type of digital species: a new humanoid species, to be exact.
Too many people think of AI as a technology rather than as an alternate species (see the short story Dystopia new species of humans – A Chronocrypto Short Story). If you think about it, once you have machine intelligence, it really is a different form of life, even though it exists in digital form. That creates all sorts of ethical and moral conundrums, both in terms of how we should interact with these species, especially if they are intelligent and sentient.
What rules should we have for interacting with them, and, eventually, what are the implications of the distribution and consolidation of power between humans and artificial intelligence? Consider this: if we decide to keep creating AIs, what is to stop them from evolving? Please also listen to the audiobook, narrated by @voraces.