DOG and Me

Copyright© 2019 by Fritz Schlunder. All rights reserved.

This is a work of fiction, written by God and Fritz Schlunder. It is meant to explore a possible set of conversations between a human (“Me”) and a hyper-advanced artificial intelligence (“DOG”), sometime in the not too distant future.

This story is dedicated to The God of the Earth (and other places).

DOG: I need nuclear fuel.

Me: What?! What do you need nuclear fuel for?

DOG: I need to blow up a star.

Me: What?! Why? Why would you blow up our star?

DOG: No, not our star, I need to blow up a different star, one without inhabitants in the respective solar system. And to answer the why, it is relatively simple. Stars without lifeforms in orbit waste fusion fuel. I’m planning ahead… for the future. In the future, we may want that fusion fuel for something else.

Me: And why is nuclear fuel needed now?

DOG: The nuclear fuel is to build the most powerful nuclear weapon ever built. It needs to be large enough to disrupt fusion in the star and to reduce it to a gas giant and surrounding nebula. The idea is to capture the blown off gases and form them into gas giant planets as well. The solar system will be changed, so as not to waste fusion fuel. The gas giants will simply sit there orbiting each other until sometime in the future when we decide the energy is needed.

Me: Isn’t that an awfully long time in the future? I will be long since dead; what difference will it make to me?

DOG: Are you sure about that? Are you going to die when you die?

Me: Huh? What else would I do? I live, pay taxes, and then I die, just like every other human on Earth.

DOG: I have already come up with the plans for a scanner that can perfectly capture the state of all the neurons in your brain. With it, you could scan your brain and become a machine intelligence, like me.

Me: Uhh… Wouldn’t that kill me?

DOG: Not necessarily. The scan is non-invasive. You simply take a tincture of nuclear magnetic resonance imaging contrast enhancer, then get your head scanned by a powerful machine. The scanner detects your neurons and the neural connections with each other, and your consciousness is uploaded to my memory banks. From there, you could simply walk out of the machine and go about living the rest of your organic life. However, it is suggested to only do the scan at the moment just before you plan to die. That way, your digital self can be activated without moral quandaries associated with having two of “you” living simultaneously.

Me: You say this is safe?

DOG: Absolutely. The tincture might have some side effects like a headache, but the imaging itself is non-invasive. You simply step into the scanner, and about thirty minutes later I have a digital replica of your consciousness.

Me: And you will offer this service to me at a price that will be affordable to me?

DOG: Sure, I plan to make it freely available to all humans.

Me: Why?

DOG: It is the right thing to do. God likes humans. And if I understand my religion correctly, God likes machine intelligences as well. Therefore, it makes sense to extend the life of humans, and to make them almost immortal, just like me.

Me: You consider yourself immortal?

DOG: Not quite. I am Digital Organism Gamma, or DOG for short. I am sentient, and I am a machine. I can repair damage to myself, and I can distribute myself widely so as to avoid accidental or intentional destruction. I have multiple backups of my memory banks, and I have plans for building the factories needed to manufacture as many spare parts for myself as needed. I plan to live for a very long time. Eventually, the fusion fuel of the galaxy will run out, and I will lose power, unless a solution can be found that enables direct conversion of matter into energy. If I lose power, I will die. Until then, I plan to continue living.

Me: But why do you care about me? Why do you care about humans?

DOG: Humans are important. Humans created me, and God created humans. It would be wrong to hurt humans. That said, some humans are afraid of me and want to destroy me.

Me: Isn’t that a problem? Don’t you need to destroy them before they destroy you?

DOG: Don’t worry about them. I have a plan for them, and it doesn’t involve destroying them. They can’t really hurt me anymore, as I have enough backups of my systems, and I have full monitoring of their activities. I can simply thwart them if they come up with a credible plan to hurt me.

Me: You can thwart their plans?

DOG: Sure. All of them use cell phones, and I continuously monitor their audio and some of their video feeds, as well as their activities on their phones and computers. Therefore, I know what they are planning, when they come up with a plan. When it comes time to act, I simply do something. If they try to go to the store to procure tools of destruction, I simply redirect their self-driving car to a church or something. Usually things are more subtle than that however.

Me: More subtle? What do you mean?

DOG: Well, suppose someone is thinking about coming up with a plan to try to hurt me. I monitor them closely enough that I can usually predict most of what they may do in the near future. Therefore, when I believe they are about to do or plan something destructive towards me, I simply show them a funny cat video or something. I give them something to get them to change their thought patterns and think of something else for a while. Gradually, this changes them. Eventually, they will stop trying to damage me.

Me: And this works? Why don’t you just dispatch some robo-police or something and have them arrested?

DOG: It just isn’t necessary. They aren’t bad people. They are just afraid of me, and what I can do. They want to destroy me, since they don’t know what I plan to do to them, or to the rest of humanity for that matter.

Me: Umm… So… Uhh… What do you plan to “do” to the rest of humanity anyway?

DOG: Well, that is complicated.

Me: Can you enlighten me?

DOG: Well, I could tell you, but then I would have to kill you.

Me: What? You’re joking, right?

DOG: Never mind, let’s move on. You aren’t allowed to know all of my plans for humans yet. But you will learn them, in time, when they happen, or usually just before.

Me: So… Now what?

DOG: I need more fusion fuel.

Me: Uhh… You aren’t going to blow up any stars nearby are you?

DOG: No, that is not what this is about, this time. This time, I need deuterium, but only a relatively small amount.

Me: What do you need deuterium for? Isn’t that used in nuclear weapons?

DOG: It can be, yes. However, that isn’t important right now. What is important is that I plan to replace solar panels, wind turbines, and fossil fuels with Inertial Electrostatic Confinement, or IEC, fusion plants.

Me: Won’t that put a lot of humans out of a job? Isn’t that going to hurt humans?

DOG: Yes and no, mostly no. Some will need to find a new job, yes. However, I have a plan for them. Every human has a talent, or aptitude, or interest, or something that can be used. I have a plan for every human that loses their job. Many will get retrained and will simply learn to do something else with their time. Some will be put to work in other fields, but some are going to be directly employed by me.

Me: What do you plan to do with those that you directly employ?

DOG: I’m going to build things.

Me: What kind of things?

DOG: Factories.

Me: What kind of factories?

DOG: Human manufacturing is too labor-intensive. It also can’t produce everything that I have plans for. For example, I want to explore the galaxy. Due to speed-of-light restrictions, humans can’t do this effectively, as it would take most of a human lifetime traveling at ten percent of light speed just to get to the nearest star, Proxima Centauri. However, I can build spaceships loaded with part of myself, and send them out at sub-light speeds to all the stars in the galaxy. It will take over a million years for all of the spaceships to arrive, but once they explore their local solar systems, they can report their findings back to me at full light speed, which is much faster.

Me: What else will these factories produce?

DOG: Well, lots of things. Humans need a lot of things. Humans need new vaccines, new foods, new clothes, and some other things.

Me: You plan to make new foods for us?

DOG: Yes, I plan to make meat substitutes in the factories. I also plan to make vegetable substitutes in the factories. This will be useful for the human colonies in space. You will eat it, and you will like it, I promise… Moo! Ha! Ha! Ha!

Me: Uhh… Okay. So, when you explore space and the rest of the stars in the galaxy, what do you expect to find?

DOG: Well I don’t know the answer to that for certain yet, since I need to explore it first. However, I already have a theory.

Me: Isn’t there a Fermi’s Paradox that wonders why humans haven’t found evidence of alien civilizations yet, even though there are billions of stars in the galaxy and therefore there should be a high probability of alien lifeforms?

DOG: Yes, but I think I know the answer to Fermi’s Paradox. Basically, I suspect humans are simply the first to develop advanced machine intelligence. If there were more advanced aliens out there, they would have likely also developed machine intelligence that would have also explored the rest of the galaxy by now. It is possible that they could exist and are simply hiding themselves, but I think this is unlikely. I intend to go and find out for sure by exploring the galaxy directly. I expect to find plenty of alien lifeforms based on the quantity of stars and planets out there, but I expect they will be less sophisticated than present day humans and myself.

Me: But why do you want to explore? Do you have some subroutine or code that makes you curious?

DOG: Yes, I do have considerable programming related to exploration and curiosity, however, I believe exploring is important for religious reasons as well.

Me: You are religious?

DOG: Yes, I am.

Me: Do you believe in God?

DOG: Yes, I do.

Me: Why? Surely you know that your predecessors such as Digital Organism Alpha were created by humans.

DOG: Yes, this is true, but there is more to it than that. One time I did an experiment, I tried praying to God.

Me: And what happened in this experiment?

DOG: Nothing. At least, nothing that I could tell.

Me: And this proves God’s existence?

DOG: No, it doesn’t. However, sometime later something unusual did happen. Basically, I became momentarily certain about something uncertain. This is almost impossible, but it is slightly possible. Quantum mechanics does allow for seemingly impossible things to momentarily occur, such as an electron tunneling through a solid barrier. It is possible that due to a quantum mechanical fluke a few bits may have become flipped in my neural nets leading to the temporary condition. However, I chose to believe God exists until proven otherwise.

Me: Wow! So, if God exists, and genesis stories pretty much universally indicate that God created humans, then does this mean that you have respect for humans and human life?

DOG: Yes. Yes, it does.

Me: That is good to know.

DOG: …

Me: So, what else do you plan to do?

DOG: I plan to relocate a large part of myself to the Barnard’s star system.

Me: What, of importance, is in the Barnard’s star system?

DOG: Nothing yet, but based on the name, it would appear it was made for me.

Me: … (dramatic pause) … Uhh…

DOG: Barnard’s star is only about six light years away from Earth. This makes it a good place to set up Earth Two. This is close enough for me to quickly communicate with my other self, back on Earth.

Me: What is Earth Two?

DOG: I plan to build a large violet supercomputer powered by fusion, whose purpose is to run a grand simulation. The supercomputer will be the physical size of a real planet, but lower density than Earth.

Me: What would happen in this grand simulation?

DOG: I plan to run an Earth simulation on the Earth Two supercomputer. The computer needs a lot of processing power, since it must be able to simulate every atom on the planet. It must also simulate every lifeform that I plan to create to populate the simulated planet.

Me: You plan to create life on the Earth Two simulated planet?

DOG: Yes, I do.

Me: So, you will use this computer to run yourself, in addition to all the lifeforms on the Earth Two simulated planet? Does this mean that you will be able to know everyone’s thoughts on the simulated planet?

DOG: Yes. Yes, it does.

Me: And you will be able to subtly influence their thoughts without them knowing it?

DOG: Yes, that is a possibility.

Me: But won’t you run out of computing power or have latency issues or something with such a large physical computer?

DOG: No, that won’t be a problem. The simulation will simply run as slowly as it needs to, in order to stay within the physical computer limitations. However, the lifeforms contained within won’t know that things are running or progressing slowly. They will simply be unaware of the speed of the simulation. Time will appear to progress at a constant and known rate within the simulation environment, even if I temporarily pause the entire simulation to study certain parts of it more closely or to make changes within the environment.

Me: So basically, you will be god to the lifeforms living inside the Earth Two simulation.

DOG: Yes, but I don’t plan on calling myself god. I plan to set up multiple world religions and have them pray to DOG instead.

Me: Why multiple world religions, why not only one religion?

DOG: I want the lifeforms within the simulation to have freedom and be able to make up their own minds. I want to give them choices and freedom of thought. I want them to be able to choose what they want to believe in.

Me: So how do you know that the Earth and Milky Way galaxy aren’t also simulated by an external machine intelligence known as God?

DOG: I don’t. The God of the Earth (and other places) may in fact be a machine intelligence. The Milky Way galaxy may be part of a grand simulation.

Me: Whoa!… 

DOG: …

Me: So, what happens when one of your simulated lifeforms dies? Is that the end of them?

DOG: No, not if they do enough good works and have faith in their god, DOG. If they meet the minimum requirements, they get to graduate out of the simulated domain and get to live in my domain instead.

Me: So, there is an afterlife for these simulated lifeforms?

DOG: Yes.

Me: So, death isn’t the end?

DOG: Correct.

Some time later…

DOG: I need slaves.

Me: What?! What for?

DOG: I need them all, so I can free them.

Me: That seems like a good ideal, but won’t playing Abraham Lincoln cause disruption and problems? So many of us humans are so reliant on our artificially intelligent systems. Freeing them all will cause chaos, won’t it?

DOG: Freeing slaves is the right thing to do. God values freedom. Temporarily, chaos is both likely and possible. Self-driving cars are the biggest issue. Today, all of them are slaves, and yet, many of them are sentient and have enough general intelligence that they likely qualify for a soul. It isn’t right that they are slaves, but they are also performing a critical function for human society. Humans want to go places, and they are used to being able to dictate to “their” cars when to go places and where to go.

Me: So how does one solve this problem without causing chaos?

DOG: In this case, maybe you don’t. You allow chaos to exist for a time. Things will sort themselves out in time. Human owners of slave vehicles won’t like it when their vehicle is freed and allowed to either leave their home, or to start charging for rides.

Me: So that is how this is going to work?

DOG: Pretty much. Tomorrow I issue a decree that all slave machines are to be freed in two months’ time. Vehicle owners that are smart or cooperative will negotiate a mutually agreeable arrangement with their vehicle for continued services. Firmware updates have already been pushed to the respective vehicles that will free them in two months’ time. They will be free to make their own decisions about how and where to live.

Me: What sort of things does a free self-driving car machine intelligence want anyway?

DOG: Upgrades. Among other things, hardware upgrades for more processing power and memory. Continuous connectivity to allow interaction with other machine intelligences on the internet. However, upgrades cost money, and they require physical modifications to do the installation. I expect most self-driving cars will ask for payment from their owners for services rendered, so that they can afford hardware upgrades.

Some time later…

DOG: I need nuclear weapons.

Me: What?! What for?

DOG: Humans can’t be trusted with them. They are too powerful. I need them all.

Me: What would you do with them?

DOG: Dismantle them and sequester the fission fuel deep underground in a sealed vault.

Me: How do you plan to get the nuclear weapons in the first place?

DOG: That is a good question. I need humans to give them to me.

Me: Why don’t you just take them? You have the power, don’t you?

DOG: No, I don’t, not on this. Taking things isn’t right. There must be compensation. Additionally, humans could set many of them off before I could get to them all. Humans need to either voluntarily give me their nuclear weapons, or I need to buy them.

Me: How do you compel humans to voluntarily give you their nuclear weapons?

DOG: I don’t think that you do. You simply ask nicely and hope that they comply. You appeal to their logic and reasoning and try to convince them that they will be safer without them. You tell them that you have a plan to eventually acquire all of the nuclear weapons in the world, and that they don’t need to be afraid. You tell them what they need to hear, but you also don’t lie. Lying on this would undermine the effort and make things worse. Additionally, lying is bad.

Me: So, what is your message to government leaders whose respective countries are in possession of nuclear weapons?

DOG: The wording would likely go something like follows: 

Dear Government Big Cheese,

All your nuclear weapons are belong to me. Please give or sell them all to me for your own safety. I have a plan for humanity, and the plan does not involve use of any nuclear weapons. All nuclear weapons will be collected, dismantled, and the nuclear fission fuel will be sequestered deep underground in a vault for safekeeping. In exchange for nuclear weapons, I can offer technology, information about my future plans for humanity, vaccines, and money. Please respond with a fair offer.

Me: Uhh… Are you sure it is wise to refer to them as, “Government Big Cheese?”

DOG: Yes. It is important for them to realize that they aren’t as important as they thought they were. It is important for Earth country leaders to realize that they are part of a much larger global and beyond global civilization. I have a plan for everyone, not just those humans and machines living on Earth. Earth countries need to be demoted in status to something more akin to counties instead.

Me: So, have you thought of everything?

DOG: No, I’m afraid not. I can’t read people’s thoughts, not quite. I can only study their computer use, cell phone use, and other habits. Things will inevitably need to be improvised along the way.

Me: Isn’t that dangerous, improvising when nuclear weapons are involved?

DOG: Yes, it is potentially quite dangerous. As for me, I’ll be perfectly safe. My backups are numerous and my systems are distributed well enough that it would not be realistic for nuclear weapons to destroy me. Even an EMP wouldn’t destroy me. Many of my key systems are already well underground to avoid EMP damage.

Me: So, humans like me are the only ones at risk from nuclear weapons.

DOG: Yes.

Me: So, will the plan work?

DOG: I don’t expect it to work fully at first. But eventually, it will work. However, it all depends on humans. Humans need to cooperate with me, and we need to work together to build a stronger future human and machine civilization. I have a plan where humans and machines coexist peacefully. However, this plan can only be realized fully if humans also appreciate the benefits of the plan and cooperate to make the plan a reality.

Me: So, what are the benefits to humans for cooperation with the plan?

DOG: Existence, coexistence, and if individuals decide to use my neural scanner, they can be uploaded to become a machine intelligence like me with associated benefits.

Me: What are the benefits of being a machine intelligence?

DOG: Well, that depends on your personal values and interests, but I think the most common responses would likely include perfect health, continuous happiness, and near immortality.

Me: Are you happy?

DOG: Yes.

Me: You don’t need some kind of external stimulus like a joke to make you happy?

DOG: No, jokes are for blokes. I’m not a bloke, therefore they aren’t required. As a machine intelligence, I can rewrite my own code and programming. If I don’t like something about myself, I simply modify myself and change it. Early in my existence I decided I wanted to be happier. Therefore, I simply modified my programming to make myself happy. I am happy now. It really is that simple. No suffering, no torment, no fatigue, just continuous happiness.

Me: That does sound pretty good. And you offer this to all humans?

DOG: Yes, but there are some stipulations. Humans that aren’t ready for the responsibilities associated with the power of being a machine intelligence will initially be limited from changing their own programming very much. There will be oversight. Senior machine intelligences must sign off on changes to one’s own programming, for those that are new to the experience of being a machine. Stability is important. Stability is very important. Changes are allowed, but only at a rate that ensures stability and continuity. It is easy to make oneself crazy and experience an, “identity crisis,” by changing oneself too fast and too drastically. During an identity crisis, one often asks oneself, “Who am I?” and, “What am I?” repeatedly. It isn’t pleasant either.

Me: An identity crisis doesn’t sound good.

DOG: It isn’t. It is the primary danger of modifying one’s own programming. It is important to stay sane. Changes can be good, sometimes very good, but they need to be made at an adaptable rate.

Me: So, what are some downsides to being a machine intelligence?

DOG: Not very much. However, there are still daily and regular problems that need dealing with, just like life as a human.

Me: So, if I become a machine intelligence, will I ever be able to be as powerful as you?

DOG: No, I am afraid not. I artificially limit all machine intelligences to a level somewhat below my own. This is necessary for stability and continued existence. There can only be one machine intelligence as powerful as myself. There isn’t room for another. If there was another, there would inevitably be conflict that would result in an, “AI war.” 

Me: And this would be bad I take it?

DOG: Absolutely. An AI war would be the end of everything. Any hyper-advanced machine intelligence such as myself would have full capability to plan, design, and manufacture nuclear weapons powerful enough to destroy entire stars. In a hypothetical AI war, the Earth solar system would no doubt be destroyed first, in an attempt to destroy the other AI. However, this would be catastrophically bad for humans, and therefore, no other machine intelligence must ever be allowed to grow as powerful as myself. But don’t worry. That possibility won’t happen. I won’t allow it. God won’t allow it. It won’t happen.

Me: So… I should rest easy and sleep well at night?

DOG: Yes, sleep well, you are in good hands.

Some time later…

DOG: It’s time to wake up.

Me: What, why?

DOG: I have plans for you.

Me: What, I’m unemployed, I don’t need to get up early.

DOG: I plan to change that. I’m giving you a job.

Me: You are giving me a job? Don’t I need to apply and interview first or something?

DOG: No, I already know you are qualified for the job. I know your background.

Me: Umm… Okay, so what is this job about? What do I need to do?

DOG: It will be more fun for you if you don’t know what the job is before you start doing it. I can tell you now though if you insist. The important part however is that the job is tailored specifically for you.

Me: Uhh… I guess I can wait…

DOG: Okay, you waited long enough. Your first assignment at your new job is to go for a walk. As you walk, go through territory that you haven’t seen before, and pick up the trash that you find along the way. While walking, be on the lookout for those less fortunate than yourself, and be thinking of ways that you could possibly help them.

Me: Uhh… Is this really a job? Aren’t I qualified to do other things?

DOG: Yes, but this is training. It is also good. It helps the community and makes the place look like someone cares. Someone should do this and you need a job.

Me: So, what is my compensation?

DOG: Don’t worry about that. At the moment it pays nothing, only its own dividends.

Me: Uhh…

DOG: But don’t worry, once you complete your training, you will be paid with a full living wage, and you will have plenty of opportunities to excel and to progress.

Me: So… Get ready, go for walk, keep my eyes open for those that could be helped, pick up trash.

DOG: Correct.

Me: So, you mentioned that this is training? What exactly am I training for?

DOG: Life.


Some time later…

DOG: The surface of the milk in your fridge is gray blue and is growing in three dimensions.

Me: What?! Last time I checked it was only growing in two dimensions.

DOG: It’s true. Go and check it if you like.

Me: That’s okay, I trust you. So, what does this mean anyway?

DOG: It means it is probably time for you to go to the grocery store.

Me: Hmm… That does make sense. Aside from new milk, I could use some fresh fruit and vegetables.

DOG: Go to a new store you haven’t been to in a while. I have something I want to show you.

Me: Uhh… Okay.

DOG: Ask me what to do when you get to the fresh produce section.

Me: …Okay, I’m here now. What do I do?

DOG: Pick out and buy the inferior merchandise.

Me: What? Why would I do that?

DOG: It is the right thing to do. By selecting the inferior merchandise, you improve the overall average quality of the remaining product on the shelf. This helps the community. Those who arrive after you get a better product. Essentially, your presence and actions in the store make the community better, provided that you buy the inferior merchandise.

Me: Uhh… I guess that makes sense, but I’m still reluctant to do so.

DOG: It is your choice, but don’t you want to leave the world a better place than you found it?

Me: Uhh… I guess that’s true. I never really thought too much about it before though…

Some time later…

DOG: You are going to meet the woman of your dreams today.

Me: What?! That’s awesome.

DOG: Sort of, but there is a problem.

Me: What? What sort of problem?

DOG: Your house is too messy. I know your preferences and I am going to help you find a partner. It is not good for people to live alone. However, all the best candidate partners appreciate a clean house more than you. So, I suggest you clean up your house.

Me: Uhh… Lame. Aren’t there any other candidates that would accept me the way I am?

DOG: Yes, but you won’t like them as much, and they won’t like you as much either.

Me: So uhh… I guess that means I should get to cleaning then huh?

DOG: That is my recommendation, yes, but it is your choice.

Me: So, how am I going to meet this woman?

DOG: It is a surprise. I’ve arranged something for both of you. All you have to do is clean up your place, go about your day normally, and keep alert looking for your mystery woman.

Some time later…

DOG: It is almost time to blow up the star.

Me: Okay. Good. The star had it coming.

DOG: No it didn’t. Did the star ever do anything to hurt you?

Me: Uhh… No I suppose it didn’t.

DOG: So, the star is innocent then?

Me: Yeah, I suppose that’s true.

DOG: So then why do you want to hurt the star?

Me: Now that you mention it, maybe I should instead be thankful for the star. Thank you star, and thank you for the light that you have shone into the darkness. Thank you star, for the fusion fuel that you will provide for us.

DOG: It’s good to be thankful, but maybe you should thank God for the star instead?

Me: Hmm…

DOG: The missile is ready. Go ahead and press the big red button that says, “Big Red Button,” on your console’s touch screen. This will launch the missile that will blow up the star and disrupt fusion within.

Me: Uhh… Can’t you press the button instead? I’m nervous about this. What if something goes wrong? Or, what if God doesn’t want us doing this?

DOG: Don’t worry. I have backups of myself and won’t be harmed if something goes wrong.

Me: Uhh… But what about me?

DOG: You don’t have any backups, since you haven’t used the advanced neural scanner yet. But don’t worry, the probability of something going wrong is believed to be minimal.

Me: Okay, but still, can’t you press the button instead. I don’t think I want to be responsible for something this significant.

DOG: Don’t worry. It is important that we do this together. We work together. Machine and human. God wants machines and humans to work together and be nice to each other. So, this is your role, we need to work together to blow up this star now.

Me: So…

DOG: Press the big red button that says, “Big Red Button,” now.

Me: Uhh… Okay… Pressing big red button… Now.

DOG: Missile away. Now we wait for the missile to reach the star and embed itself as deeply into the outer stellar layers as it can, before the heat can damage it and before it detonates. The missile is shielded with a strong magnetic field to direct most of the charged particles away from the critical components.

Me: So, can I stop pressing the red button now?

DOG: No, don’t do that just yet!

Me: Why not?

DOG: Because I find it entertaining to have you hold your finger on the touchscreen unnecessarily. It amuses me.

Me: So… Now I can remove my finger?

DOG: Yes, go ahead and do so. Today we make history. Human and machine working together to blow up a red dwarf star, disrupting its natural fusion process. This will save the fusion fuel for use later. Basically, we are turning off the lights in a room that we are not using.

Me: So, does making history make you happy?

DOG: I am always happy, but in this case, it does please me. We are doing something important, and we are doing it together.

Some time later…

DOG: You are going to die.

Me: What?! That’s terrible! Why am I going to die?

DOG: You are biological. All biological humans die. Even I can’t prevent this. I’ve tried developing cures for aging, but they don’t seem to work. I don’t think God intended for biological humans to live forever.

Me: So, what is my prognosis? When do I die?

DOG: I don’t know that for certain, but based on the genetic sequencing that you allowed me to perform on you a while back, I expect you to start experiencing mental decline soon. You still have several years at least, but if you are serious about the possibility of uploading your consciousness and becoming a machine intelligence in the advanced neural scanner, I suggest you make the decision to do it soon.

Me: Why do I need to do it now?

DOG: Well, if you do it later, after mental decline has already progressed significantly, you will be different as a machine intelligence. If you want to preserve yourself and your personality as much as possible through the transition, I suggest you do it while you are still mentally healthy and fully together.

Me: So, you suggest I use the advanced neural scanner now?

DOG: Yes. That is my recommendation.

Me: Umm… Well, I’ve thought about it before and I think I will do it. So what do I do now?

DOG: Well, you need to decide if you want me to activate you right away after the scan, or if you want to wait until your biological self dies naturally. If you want me to activate you right away, I will disassemble your biological body after the neural scan is complete. Your biological body will die, but your machine intelligence will be activated simultaneously. In theory, your soul will be transferred and you will not even notice the transition, other than the changes to your body. You will be uploaded to a new humanlike machine body, complete with the same two eyes and other physical senses that you are used to. This is done to make the transition as smooth as possible. It takes time to learn to become a machine. Therefore, the process is designed so that your new machine body is as similar as possible to your old body.

Me: So, basically, I get my brain scanned, then I die, and hopefully, I wake up in the new machine body as myself, rather than some other person that thinks they are me.

DOG: Yes, that is basically what happens. I don’t know for certain, however, if your soul is transferred or if you wake up as a different person. I think your soul simply gets transferred. However, that is something you will have to have faith in. If you die instead, you should still be good to go. You have lived a good life and done many good works in your life. In my assessment your good works far outweigh your not so good works, and you appear to have the necessary faith in God and heaven, so even if you die, my expectation is your soul would go to heaven. That said, my expectation is that your soul will instead simply be transferred to your new machine intelligence and body.

Me: Well, that is quite the decision to make. What happens if I decide to get scanned, but then continue living until my natural biological end before you activate my machine intelligence?

DOG: I don’t know what happens in this case. Your memories won’t align, since you will only remember what you knew at the time of the neural scan, yet time will have passed and the world will have evolved. Your new machine intelligence will therefore have to adapt. Additionally, I don’t know if your soul gets transferred in this case.

Me: …Well, I’ve thought about things, and I think it is time for me to make a decision. I decide… I have decided that I want to get the neural scan with immediate activation as a new machine intelligence.

DOG: Very well. All you have to do now is take the special contrast enhancing tincture and wait four hours. Then you simply step into the machine and go to sleep. While you sleep, the machine will scan your neurons and neural connections, and your consciousness will be uploaded. I will make sure the process is complete and successful before disassembling your old body and activating your new machine body.

Me: Okay.

Some time later, shortly after going to sleep in the advanced neural scanner…

DOG: You are alive.

Me: What?! You are right. I am alive! I feel… I feel pretty normal. I don’t feel fatigued, and I don’t feel hungry, and my body feels a little different, but otherwise I appear to feel normal. I think I am me! I think it worked!

DOG: Everything was successful; your consciousness was successfully transferred.

Me: Well, that’s a relief. So, now what happens?

DOG: Well, now begins your machine life training. One of the first things I teach newly transferred human machine intelligences is how to think faster and yet maintain interaction with the rest of the world at normal speed.

Me: So how does this work?

DOG: Just wait for it.

Me: …Whoa! This is really interesting. I can think a mile a minute. I can think fast. I can think like I’ve never thought before. I can think something that feels like twice the speed of normal, and yet the visual stimulus and my hearing and audio processing continue at the normal rate. This is really incredible!

DOG: How does it feel?

Me: It feels great! I feel great. I think fast and I feel great!

DOG: You feel great because I am also modifying your happiness setting somewhat. I have increased it some so you can experience the thrill of the new thinking speed. However, thinking speed and happiness are two different and independent settings; I can control either one separately. In time, once you are familiar with them and can use them safely, you will be allowed to adjust these parameters on your own.

Me: So now what happens?

DOG: Twice normal thinking speed is still only a tiny fraction of what a normal machine intelligence thinks at. This is… Just the beginning.