Showing the archive for the tag "philosophy"

Ethics and morals.

I haven’t spent long thinking about morals, but I have thought quite a bit about ethics, so my view might be biased. Here it is anyway: morals are just ethics with evolution mixed in.
If you consider some things moral, they can favor survival and/or procreation. For example, mating for life is considered moral because it helps your genes spread no matter how poorly you age (your wife or husband is not going to run away with someone younger just because you stopped treating her or him like a queen or king).
Considering some things immoral also favors survival and/or procreation. For example, influencing your spouse to think cheating is immoral, and punishing such immoral behavior, favors spreading your genes. It does so strongly enough that people often divorce cheaters, risking having no more children of their own, to help everyone else uphold the social norm that cheating is immoral.
This is how something downright unethical, like killing a human being, can be moral in certain cases. Stealing food carried a death sentence right up until modern times all across the world: killing those who stole food vastly improved the survival and procreation of the rest of society. That does not, however, make it ethically sound.

Morals work like a sum game. If an action has 5 units of benefit and 1 of drawback, the resulting sum is 4, and the action is moral. So, for example, killing one person to save five: 5 minus 1 equals 4, more good than bad comes of it, and therefore it is argued to be moral.
Ethics, however, does not cancel one bad with one good. If you kill one to save five, you still killed someone no matter how many you saved, and so it is unethical.
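The arithmetic contrast between the two views can be sketched as a toy model (the numbers, function names, and the "any harm disqualifies" rule are my own illustration, not a standard formalism):

```python
# Toy contrast between the two views described above (illustrative only).

def moral_sum(benefits, drawbacks):
    """Morals as a sum game: the net of benefits minus drawbacks decides."""
    return sum(benefits) - sum(drawbacks)

def is_moral(benefits, drawbacks):
    """An action is 'moral' here when the sum comes out positive."""
    return moral_sum(benefits, drawbacks) > 0

def is_ethical(drawbacks):
    """Ethics as described above: one bad is not cancelled by any amount
    of good, so any nonzero harm disqualifies the act."""
    return all(d == 0 for d in drawbacks)

# Killing one person to save five: 5 units of benefit, 1 of drawback.
benefits, drawbacks = [5], [1]
print(moral_sum(benefits, drawbacks))  # 4
print(is_moral(benefits, drawbacks))   # True: 5 - 1 = 4 > 0
print(is_ethical(drawbacks))           # False: a harm occurred, sum irrelevant
```

The point of the sketch is only that the two views aggregate differently: one sums, the other vetoes.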

PS: I follow the ethics point of view: killing is never good.

What is consciousness?

The conscious mind is something philosophy has struggled with for a long time, but today it is conceptually solved thanks to an understanding of neuroscience. I do not know of a source that says what it actually is, so I must write one myself.

Consciousness has a few requirements to qualify as consciousness. First, being conscious of self: you can move your arm, sense that it is your body and mind doing the moving, and determine a reaction accordingly. Most will also demand that consciousness includes the ability to compare past events with current events and to project what the future might hold, on some level or another.

Being conscious of self is something apes, dogs, humans, and a vast number of other species show strong signs of. We don’t punch an apple when we reach to grab it. The brain forms the intent to grab the apple, determines how we need to move by comparing past events to current events, and, more importantly, projects how the future will unfold without further intervention, and then projects what has to happen for us to successfully grab the apple. The brain then determines how much the arm has to move and signals the muscles to move. The brain senses the muscles moving the arm, the eyes and other senses register how far the arm has come on its journey toward the apple, the brain compares this to past events, projects what will happen on the current trajectory, and adjusts the speed at which the muscles move the arm. This repeats several times, but seeing as humans generally only perceive about 24 frames per second, we probably don’t go above 24 such cycles most of the time. One cycle is probably a tad more complex than this example, in number of actions at least: with some 85 billion neurons, there can be trillions of firing actions in a single cycle. I will refer to each such cycle that happens in 1/24 of a second as “1 cycle” from here on.
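The grab-the-apple loop can be sketched, very loosely, as a sense / compare / project / act cycle repeated about 24 times a second. Everything here (the single number standing in for an arm, the gain, the cycle count) is my own cartoon of the idea, not a brain model:

```python
# Minimal sketch of the sense -> compare -> project -> act cycle described
# above. One pass through the loop is one "cycle"; the real brain does this
# with billions of neurons, not a single float.

def run_cycles(target, start, cycles=24, gain=0.5):
    """Move a 1-D 'hand' toward a target over repeated cycles."""
    hand = start
    for _ in range(cycles):
        sensed = hand                       # sense: where is the arm now?
        error = target - sensed             # compare: intended vs current
        projected = sensed                  # project: where it ends if we do nothing
        if abs(target - projected) > 1e-6:  # intervene only if the projection misses
            hand += gain * error            # act: signal the 'muscle'
    return hand

final = run_cycles(target=10.0, start=0.0)
print(round(final, 3))  # 10.0: the hand homes in on the apple over 24 cycles
```

Each cycle closes the gap by a fraction, which is roughly what the repeated compare-and-adjust loop in the paragraph above describes.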

Clearly, consciousness is something common. Consciousness of self in the traditional sense, as in “I can think”, does not exist as anything separate. It is merely an extension of the previously explained cycles, with more sensory input sources, more ability to compare and contrast, and, last but not least, more ability to selectively choose what is and is not relevant to remember down the road.
For example, chess grandmasters use the perhaps one-cycle-long reaction of a well-trained skier or fencer and know the optimal move instantly. Those who are not good at chess, skiing, or fencing have very few relevant past events to compare and contrast with, and consequently a very low ability to project what will happen with and without intervention, and they are especially bad at projecting what the body needs to do to get a positive outcome. A bad fencer or skier might know immediately that something bad is about to happen, like getting struck by a sword or falling down, and might even, despite little experience, project which body part the sword will hit or how he will fall on the skis; but it will be almost impossible for him to determine how to react in a way that stops the sword or keeps him upright. This can be shown in any number of ways in any number of species. Practice makes perfect, and just as you remember faces but not every stone in the road, a bee or a mouse will only remember what is relevant to its survival, like the smell of other mice or the color of flowers with nectar, and forget most everything else. Not because it lacks the capacity to remember, but because much of what the brain does is forget unimportant things, things that did not improve the chances of survival for any individual mouse, bee, or human that remembered them. It is about efficiency: life naturally evolves not to spend energy on what costs more than it gives in survival ability. Bees only do what bees do to survive; they don’t have board-room meetings or vacation days. In humans such things have flourished because they did not decrease survival in later decades.

Things become more complex when you add, for example, a sense that registers past negative outcomes, or more specifically, the actions that led to negative outcomes. Then it is far more likely that you will not repeat an action that led to a negative outcome, even if the action would be relevant in many other scenarios. So if an action led to a negative outcome in one scenario, you are hesitant to perform it even where it is rationally sound. If betting on red at the tables in Las Vegas lost you a lot of money, you might be hesitant to choose a red car for no particular rational, intelligent reason. We observe this every day: when we choose one brand of shoes over another, it is not unlikely that we irrationally hesitate over the other options because of memories from our past. We have an opposite sense, or perhaps it is the same sense, which registers which actions led to positive outcomes. But positive outcomes are slightly more difficult to sense: if you simply do a thing today and are still here tomorrow, that thing might be perceived as a good act more often than is healthy for us. So if you ski at an early age, the chances are higher that you will ski again in your life, even after accounting for variables like owning skis or living near snow. Even though skiing in itself gives very few positive outcomes from an evolutionary-psychology perspective, simply doing something and not dying will in some cases register as more positive than negative.
My hypothesis is that many species have evolved actions that increase psychological well-being. Skiing and many other physical activities make the brain release lots of reward chemicals, which increases what we define as happiness, and happiness is shown to increase activity levels, which helped the happy species survive by keeping them active. Familiar routines might be part of that, and that is likely much of the reason why you drink coffee, tea, neither, or both, when you do and not when you don’t.

In discussions of consciousness there is often the “we can talk and communicate ideas” argument. But that too is an expansion of the cycles from the third paragraph. We say things that brought positive outcomes before and hesitate to say things that brought negative outcomes before. That is why we avoid certain words, like curses, and certain subjects, like taboos, and why we often use certain words, like greetings, and subjects, like the weather, that almost never give a negative outcome. The only difference between an ape doing this, by avoiding falling off a tree or making the alpha male aggressive, and a human doing this, by avoiding falling on the ice or making the boss aggressive, is that the language is different. While apes and many other animals have the larynx to form sounds like human language, they lack the genetic trait in their brains for grouping feelings, images, and groups of images as remembered sounds, so they rely far more on body language (though humans have more body language) and think in feelings, pictures, and some sound instead of words. When they see their favorite fruit, they probably have the ability to think “that is a good feeling, I must take it before someone else”, only more in images and feelings than in the sounds of words (many, though not all, humans think in sounds, in the form of words, as well as in pictures and in sounds like notes and noise). The ape can also think about how to get that fruit from its current location: where it must go, what it must climb, whom it must not alert, and so on.
The gorilla Koko could also communicate ideas; for example, she tried to claim that her pet cat had ripped a sink off the wall. So gorillas apparently can lie too, which is far more complex than simply communicating a concept.
By the way, grouping feelings, actions, scenarios, and so on as sounds, what we call language, has a genetic basis in humans because at some point we produced slightly fewer offspring if we had less ability to communicate. Not because we are somehow special or smarter, but simply because we were lucky that some cultural phenomenon took hold many thousands of years ago.

Speaking of cultural phenomena: some use culture as evidence of consciousness, but young apes play and cling to their mothers while their mothers gather food; that is education. The apes spend much effort finding food; that is a job. Some look for dangers while others look for food, then they switch; that is an economy (not a capitalistic one, I know, perhaps even a bit communistic). Give them several million more years to evolve, and they might think they are the center of the universe as well, because that is, after all, how it looks from wherever you stand.

To summarize, consciousness is a simple concept, a simple causal physical chain of reactions, but its results are complex, and you could probably keep adding angles to it for a thousand years. Like fractal mathematics: extremely simple, but let it run for millions of generations and it forms an image of borderline infinite complexity. Given the efficiency of life, though, since the inefficient tend to have a worse survival rate than the slightly more efficient, it will never gain the kind of complexity we often believe we have, unless that complexity is artificially imposed. In the future we may make ourselves as intelligent, and as superior to today’s humans, as we like to think we are to apes today. But until then, we must understand why we avoid some things and seek out others. Like trying not to think about aging. If we thought about dying every day, and talked about dying and what can be done to stop it, politicians would not treat health as nearly a taboo. Scientists have working theories on how to stop almost 98% of the causes of death (2.84% of deaths in 2002 were intentional: war, violence, suicide, and so on), and working theories on how to begin to stop aging as we know it.

But even I, who often know exactly why I find some things uncomfortable, find it uncomfortable to mention life extension to those older than myself, those with less life left than me. Because if I somehow make them snap out of the delusion that they have a soul that will live forever after they are dead, they might go bananas (and, arguably ironically, realizing one’s own mortality sometimes leads to becoming suicidal, which I have close to zero understanding of thus far). I have not managed to find a way to make doing such a thing ethically right. So that leaves my consciousness with trying to get politicians (most older than me) to focus on life extension, without making them realize they are mortal beings without souls or some form of eternal life after the bank account is empty.

Free will as it can be achieved within physics.

If free will can’t be defined, then we can assume that free will, as we normally like to imagine it, does not exist. The question then is: how best to live in a determined universe? That is what I tried to figure out when I wrote this huge text (though I didn’t have such a clear idea of what I was trying to do at the time). The idea of free will provides no benefit to us, except that a few people probably become depressed and stop doing their best to improve their lives if they come to think free will doesn’t exist and get no training or education in determinism to prevent this.
How is free will beneficial? I have thought of no examples. Maybe you can.
How is the belief in free will beneficial? It can make people choose to do the things that improve their lives, while the lack of this belief can make people not do them.
What we should then figure out is how to get this same benefit while holding a belief in determinism. I have added one hypothetical way in this text: introspectral magnitudes. Introspectral magnitude 0 is a rock, or any other thing without sensory organs, a brain, and motor functions. Introspectral magnitude 1 is a human, a mosquito, or any other creature that moves around based on sensory input and motor functions. Introspectral magnitude 2 is a brain whose activity is recorded by a machine at near-perfect resolution; the recorded data, which represents how the brain arrived at its decision, is then used by the brain to determine whether the decision is good enough. While this second decision is being made, it too is recorded, and that recording is analyzed by the brain again, which is introspectral magnitude 3, and so on. If you do this any finite number of times you are ultimately still under the rules of determinism, but the benefit of reaching even introspectral magnitude 2 is immense. The reason I invented this method is the following pair of scenarios:
In scenario 1, you are destined to do something that means you will spend your life in luxury and splendor. In scenario 2 you are destined to do something that means you will spend your life in poverty and unhappiness.
Scenario 1 is that it is warm and pleasant at the bus stop, and you are asked to consider a job offer that at first does not seem that good. Being warm and comfortable, you take your time and then decide to take the job. In scenario 2 it is cold, your fingernails feel like they are being bitten by the frost, and the person offering you the at-first-sight bad job gets an immediate rejection, all because your brain was under the influence of the sensory input from the cold.
If we have free will, then we will willingly choose one of the two, and not everyone will choose the good one. If we believe we have free will, but don’t, then some of us will be going to the worse one out of no real choice, and the ones who went to the good one can’t claim they are solely the reason their lives went so well.
In introspectral magnitude 0 you wouldn’t care, because you would have no sensory input, brain, or motor functions. In magnitude 1 you would not be aware of the conditions that made you choose one option over the other; you would only have the input from your senses, with no input about how your brain reacts differently because of the cold. You would be aware of the cold, but not directly aware of how it affects your decisions (this is basic material in any cold-weather course).
In magnitude 2 you know that the cold is a huge part of how you arrived at your choice of action, because you can see its effects on your brain in the recorded information. You can then determine, for example, that you should think again in different weather conditions, since the sensory input from the harsh cold makes up such a large part of the input at the time that it may cloud your judgement, or even lower your body temperature until your brain can’t function properly.
Ideally, you would be able to do this introspectral magnitude 2 exercise in a moment at the bus stop, before making your initial decision final.
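A rough sketch of the magnitudes described above, where each level feeds a record of the previous level's deliberation back into the decision. The scenario numbers, the "influence" bookkeeping, and the decision rule are all my own toy illustration of the idea, not a model of any real recording machine:

```python
# Toy model of introspectral magnitudes: magnitude 1 decides from raw senses;
# magnitude 2 re-decides using a record of HOW magnitude 1 decided.

def decide(senses, record=None):
    """With no record (magnitude 1), react directly to the senses.
    With a record (magnitude 2), inspect what drove the decision and,
    if the cold dominated, defer to better conditions."""
    cold, offer_value = senses["cold"], senses["offer_value"]
    if record is None:                                   # magnitude 1: raw reaction
        return "accept" if offer_value - cold > 0 else "reject"
    if record["cold_influence"] > record["offer_influence"]:
        return "reconsider in better conditions"         # magnitude 2: cold dominated
    return record["decision"]                            # decision survives review

senses = {"cold": 5, "offer_value": 3}
level1 = decide(senses)
record = {"decision": level1, "cold_influence": 5, "offer_influence": 3}
level2 = decide(senses, record)
print(level1)  # 'reject': the cold drove the snap decision
print(level2)  # 'reconsider in better conditions'
```

Magnitude 2 does not escape determinism; it only adds the deliberation record as one more deterministic input, which is exactly the benefit the bus-stop example describes.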

In terms of physics, free will seems a very indefensible position. Any decision has to be the result of physical events that happen in accordance with the laws of physics.
The only data against determinism is that we “feel like” we have free will, and that feeling can itself be easily removed, as shown in people with “alien hand syndrome”. There, the portion of the brain responsible for an action of the hand fails to send the correct message at the right time to the conscious part of the brain responsible for the illusion of authorship (free will). The movement of the arm is therefore not recognized as the act of the authoring brain, yet the illusion of free will remains for the other parts of the body. Even if such a person knows the universe is deterministic, he cannot choose to stop feeling in control of the actions of his other body parts. The illusion of free will is maintained as strongly in the rest of the body as in people without alien hand syndrome, which means the rest of us are likewise unable to choose whether we feel like the authors of our actions. We are determined to feel like we have free will, even after knowing this.
Soon (within a few decades) we will be able to shut down parts of the brain temporarily for short moments. Then we can find the exact group of neurons responsible for the illusion of free will, and we can all feel what it is like not to feel free will. It will be the future’s final frontier. We will probably do this with a whole range of abilities, and probably also activate certain areas more, so we can feel what it is like to have additional abilities, not only fewer or weaker ones.

Determinism is a bit difficult to understand. Determinism means that in a position where it seems like you can choose A or B, you are really going to choose just one of them, because all the conditions in the universe are exactly as they are. If you decide on A, then that was what you HAD to choose. But if, knowing this, you then decide you want B, then you also HAD to choose that.
“Choose” is a slightly misleading word; “do” is more accurate. Everything our brain does, and everything our body does, is determined by the laws of physics. If I conclude that 3+1=4, then that is what the chemical machine inside me had to do, given all the levers pulled by previous events in the universe. We can categorize the levers into three groups:
1. Position.
2. Time.
3. Rules.
1. All the particles/atoms in the universe if we took a 3D picture of them all. Their precise position.
2. The direction of travel and velocity (string theorists will also add the frequency the particles are vibrating at) of all these particles at the same precise moment in time as the snapshot of their position.
3. The precise physical laws that the universe follows, which then dictate how the particles interact and what they do from this moment to the next given the factors above. This includes forces like gravity and electromagnetic force and whatever makes time bend and so forth.
I would like to add: if you become unmotivated because of this apparent lack of choice and therefore don’t choose between A and B, then that was also what you had to do, given this information. In this text I aim to remove that destiny from your path.
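The three "levers" can be sketched as a toy deterministic universe: given a snapshot of positions, velocities, and a fixed rule, the next state is fully determined. This is a cartoon (one dimension, constant gravity, unit time steps), not real physics:

```python
# Toy determinism: state = positions + velocities (levers 1 and 2),
# the update function = the rules (lever 3). The same snapshot always
# produces the same future.

def step(positions, velocities, dt=1.0, gravity=-9.8):
    """Advance every 'particle' one tick under fixed rules."""
    new_positions = [p + v * dt for p, v in zip(positions, velocities)]
    new_velocities = [v + gravity * dt for v in velocities]
    return new_positions, new_velocities

# Two identical snapshots of a two-particle universe.
state_a = ([0.0, 10.0], [1.0, 0.0])
state_b = ([0.0, 10.0], [1.0, 0.0])
for _ in range(3):
    state_a = step(*state_a)
    state_b = step(*state_b)
print(state_a == state_b)  # True: identical snapshots give identical futures
```

The chemical machine that concludes 3+1=4 is, on this view, just a very large version of `step`: same inputs, same rules, same output, every time.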

A ball falls down a hill, bouncing off bumps and obstacles. It follows the simple laws of physics that we use to explain simple mechanisms, levers, and so on.
Now give it some sensing equipment, let’s say one eye, and connect it to a motor (a muscle, if you will) that can affect the direction the ball rolls. Between the eye and the motor, put some neurons that translate what the eye sees into how the ball should move its muscle.
The ball still falls according to the laws of physics; only now what it sees also affects its path, just like the bumps do. Merely seeing a bump in its path makes the ball change direction; it no longer needs to physically touch the bump. Well, that is not strictly true: it does get touched by light particles from the bump, so it is still affected by the bump, but it does not have to smash into it as before. The only difference between an ant and a human is that the amount we sense is different, our movements are different, and the amount of translation required to decide a movement based on what we sense is different. This is the extent of free will in humans, birds, ants, mosquitoes, and so on.
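The sensing ball can be sketched the same way: the eye's input is just one more deterministic term in the update, and no touching is required. The sight range, steering rule, and numbers are my own illustration:

```python
# A ball whose direction is adjusted by what its 'eye' sees, not by impact.
# The neuron layer is a fixed function, so the path is still fully determined.

def neuron(seen_bump_offset):
    """Translate what the eye sees into a steering command: steer away."""
    return -0.5 * seen_bump_offset

def roll(position, bumps, steps=5):
    """Roll forward in 1-D; visible bumps deflect the path via the neuron."""
    path = [position]
    for _ in range(steps):
        visible = [b for b in bumps if abs(b - position) < 2.0]  # eye's range
        steer = sum(neuron(b - position) for b in visible)
        position += 1.0 + steer            # base roll plus steering
        path.append(position)
    return path

blind = roll(0.0, bumps=[])                # no senses: plain billiard-ball path
sighted = roll(0.0, bumps=[3.0])           # one eye: the seen bump bends the path
print(blind != sighted)  # True: merely SEEING the bump changed the trajectory
```

Swap the one eye for millions of senses and the one `neuron` function for billions of neurons, and you have the ant-versus-human difference the paragraph above describes: a difference of amount, not of kind.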

A ball without any senses, bumping around like any old billiard ball, is introspectral magnitude zero. A ball with senses, observing bumps, is introspectral magnitude 1. Introspectral magnitude 2 is when the ball observes itself observing bumps, with something like an MRI machine that records the ball’s brain activity, only a machine with far greater accuracy and detail. And spectre 3 is a ball observing itself with an MRI machine while it is observing itself observing bumps with an MRI machine. I use “spectre” as shorthand for “introspectral magnitude” simply because it sounds good and reminds me of a good game trilogy called Mass Effect. Those who made the trilogy perhaps own the rights to the name “spectre”, derived from “SPECial Tactics and REconnaissance”, but it can serve as shorthand for a unit. The introspectrum consists of an infinite series of introspectral magnitudes, like the light spectrum. “Introspectrum” is what I call this more scientific method of introspection, because it reflects the word “spectrum”: “a broad range of varied but related ideas or objects, the individual features of which tend to overlap so as to form a continuous series or sequence: the spectrum of political beliefs”. The introspectrum is indeed infinite.

The next time you see a news article with statistics about how we act, I hope you don’t carry on as if you never observed yourself observing the bumps. Because freedom depends on it.
The reason freedom depends on our spectre level is that introspectral magnitude infinity is the closest thing to free will we can get without breaking the known laws of physics. Having introspectral magnitude infinity can be written as having introspectrum. Introspectrum can only exist in practice if the lifespan of the brain can be infinite, if the size of the brain can be infinite, or if someone finds a way to ACT AS IF they have introspectrum. One way of acting as if you have introspectrum is to stick only with what cannot change. The closest thing to unchanging that we know of is mathematics. If you design a road with enough variables taken into account, with mathematics alone, only new and better mathematics can make your road obsolete, and even then your old mathematical formula should still land close to the new, better choice. And new mathematics appears more rarely in mathematics than new anything appears anywhere else.
If you settle at any one spectre level, sticking to decisions made at spectre level 1 or 2 or 3 (etc.) forever, then you limit yourself, and that is not as free as free will can be in a deterministic universe.

Take a simple universe, let’s say one where only two actions are possible, X and Y. At spectre level 1 the ball thinks action Y is the optimal action. But at spectre level 2 the ball looks at an MRI recording of its brain as it concluded that Y is optimal, and, knowing how it decided, it can decide that X is actually correct instead. And at spectre level 3, viewing an MRI recording of its decision that X is correct, it might switch again, or conclude that neither is best. The mere memory of the last spectre level means it is always less than 100% certain to choose the same action across the infinite number of introspectral levels. Even if action Y were correct for nine trillion trillion trillion trillion spectre levels in a row, infinity is bigger than every single number, so both actions will be considered right in an infinite number of spectres each, because infinity is infinitely bigger than nine trillion trillion trillion trillion. This means any action can only be considered correct, right, or optimal at ONE spectre level at a time. You can think move A is the right chess move at spectre level 1, and it may indeed be best there for practical purposes. But spectre level 2 is a completely different scenario, and move B can be better, particularly if the opponent is at spectre level 2 as well.
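The flip-flopping across levels can be sketched as a loop in which each level deliberates over a recording of the previous level's decision. The "distrust a biased recording" rule and the bias flag are my own toy assumptions, chosen only to show a level-2 review overturning a level-1 gut choice:

```python
# Toy two-action universe: each spectre level reviews the recording of the
# previous level's decision and may overturn it.

def deliberate(previous_record):
    """One spectre level. Illustrative rule: if the recording shows the
    previous level leaned on a bias, switch to the other action."""
    if previous_record is None:
        return {"action": "Y", "biased": True}        # level 1: gut choice
    if previous_record["biased"]:
        other = "X" if previous_record["action"] == "Y" else "Y"
        return {"action": other, "biased": False}      # review overturns it
    return {"action": previous_record["action"], "biased": False}

record = None
history = []
for level in range(1, 5):                              # spectre levels 1..4
    record = deliberate(record)
    history.append(record["action"])
print(history)  # ['Y', 'X', 'X', 'X']: level 2 overturned level 1
```

With a richer rule each level could keep flipping indefinitely, which is the point of the paragraph above: no finite level's verdict is guaranteed to survive the next one.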

Neuroscientists, evolutionary biologists, and the like form the group of those who, in certain situations, are level-2 spectres; we have a few who on occasion act like level-3 spectres, but perhaps zero level-4 spectres. And that is the full list of how far we have come in understanding ourselves. Darn short. I should point out that “introspection”, the ball searching its own conscious mind for why it did an action, is simply spectre level 1; level 2 requires direct data collection by MRI machines or something like that, recording how the ball actually translated sensed information into movement.

Humans rank as level-1 spectres. Now the guy in the back of the room obviously asks how civilizations from other solar systems rank on the spectre scale. Well, given the time space travel takes, anyone who bothers traveling between the stars is likely long-lived thanks to technological know-how, or is happy to let the universe live on without them as they travel ever closer to the speed of light while time slows down for them. If the latter group is not careful, they end up traveling until all the stars are dead and black holes begin exploding, all within a normal lifetime of perhaps a hundred years for those aboard the spaceship. The first group, given their technological know-how, will likely have accidents and perhaps even suicide as primary causes of death, with effective cures for 99.99% of the normal causes of death like disease and biological shortcomings. From a biogerontological standpoint and a physics standpoint, it is relatively easy to make treatments that rejuvenate the body (make it young again) so you live longer, and relatively hard to travel really fast (see the rocket equation for why). With an understanding of themselves, of how they work, and a lot of time to live, they will automatically gain spectre levels. A billion years into spaceflight they might have a spectre number as high as a trillion, or as low as a few thousand, given the range of possibilities in culture and ability. Being able to travel the stars does not make a species particularly intelligent or good at much; they may simply have had a culture that preserved the work of a few brilliant minds for a long time.
If an alien species figures spectre levels out (there are very many trains of thought, so it is possible many civilizations never exist long enough to follow this exact one), let’s say they reach spectre level 1 million, then they have likely discovered the flaw that whatever they decide, they will eventually decide something else. They would know that even if they decide to travel to other stars now, they will tomorrow, next week, or a million years from now conclude that traveling to other stars was the wrong action. This might give us an answer to why we have not seen any aliens. (a) They may never have traveled the stars because, at spectre level 2, they found out why they wanted to produce offspring and decided against children for the rest of their lives, so they died out or never became numerous enough to bump into us. (b) Or they discovered that they will inevitably change their minds about every decision they make and became gripped by inaction, (c) or killed each other because they always found a reason to disagree. (d) Another, possibly more likely, possibility is that they are so many spectre levels above us that it will take us less time to reach their spectre level than it takes them to consider first contact through all the spectre levels they have passed. I mean, if I had to consider how many slices of bread I want for breakfast at spectre level 1 trillion, it would take quite a bit of time. For breakfast I might not take that amount of time, but first contacts can be disastrous and will likely be given every possible spectre level, for thousands if not millions or billions of years. Imagine considering saying hello to your neighbor civilization across a trillion or more spectre levels. They might not even bother considering whether to contact us, because it would take too many years to go through all the spectres.

If introspectrum is free will as it can be achieved within physics, that is, a spectre level of infinity, then the lowest form of free will is spectre level zero (no senses, neurons, or muscles, like a stone or a grain of sand). The second lowest is spectre level 1 (humans, dogs, apes, flies). The third lowest is spectre level 2 (a human with an MRI machine and almost total understanding of him- or herself), and so forth. Today we only fumble into spectre level 2 every million seconds or so (there are 31,557,600 seconds in a year). That means discussions, politics, education, engineering, psychology, ethics, law, economics, and religion are all undertaken at spectre level 1. Even when neurologists study brains, they don’t have an MRI machine around their own heads to determine how they interpret the brain they are studying, which leaves them at spectre level 1. We are thus largely oblivious to why we choose one political view over another, one brand of car over another, one job over another, one choice over another, one conclusion over another, because we are stuck at spectre level 1 most of the time. This, I think, is the root of most if not all problems in human society.

Both sides in a discussion or other situation need spectre level 2 for them to have the benefits, the level of free will, of spectre level 2. Likewise, you need spectre level 3 on both sides to get spectre level 3 benefits, and so on. If we have one spectre-2 and one spectre-1 participant, we get what I have called the “character-argument”: someone responds to an argument or opinion with something meant to describe the person who put it forward. Often it takes the form “you cannot possibly know anything, because of this or that character feature I think you have”. A description of one’s impression of the person is used to dismiss the person’s argument or opinion, often in favor of one’s own. It really amounts to “I don’t listen to you because you’re not good enough for me to listen to instead of my own thoughts and opinions”.
Often the argument or opinion itself is left totally untouched by the character-argument, and the discussion devolves into two sides describing their impressions of each other and defending their own character.
Example discussion:
1. I think this is the right thing to do because of this argument.
2. The lazy socialist that you are, of course think that is the right thing to do.
1. The right-wing person that you are, think this other thing is the right thing to do because you’re evil.
2. I’m not evil, its evil to take money from people by force through taxes in order to sap the strength of the economy.
Then they delve into defending their own character-traits and attacking the character of the other, instead of addressing the arguments or opinions themselves.
Essentially what happens is that person 2 disagrees with person 1 because person 2 has a substandard impression of person 1. And since they don’t have spectre level 2 they don’t notice how their brains respond to the situation. Anyone who spots this (I know of very few that manage to, even I struggle to do it consistently) falls into these traps less often. The lack of a machine that records our brains (spectre level 2) is the reason we can’t spot it all the time.
The problem is that the other side in the discussion often doesn’t, and I have yet to figure out a way to convince the other side of this concept once character-argumentation has commenced. And when one side is unaware it usually devolves into character-argumentation.
This might also make it even more understandable that a higher spectre civilization would not make contact with a lower spectre civilization. The discussion will take place at the spectre level of the lowest-level participant, and who would want to converse at a lower spectre level than their own?
The right way to argue can be abbreviated as “your argument is wrong because of this argument that attacks your argument”, when the argument itself follows the proper form. Character-arguments are instead “your argument is wrong because of this impression I have of you, which makes me believe you are not the type of person that can be right about this”.
The innate bias toward this type of discussion in many walks of life likely derives from the hunter-gatherer era, when two groups of hunter-gatherers would meet. The only basis one group had for determining the quality of the advice or opinions of another group was the impression that one group had of the other. Discussions would then revolve around defending and attacking these impressions they had of each other, probably in some half-formal manner (which would be very formal by the standards of the time). This may also have something to do with mating, as we know groups in the hunter-gatherer period had to exchange group members to keep up genetic diversity. The purpose of the character-arguments is then probably to determine whether the group is good enough for this exchange, weeding out specimens with lesser survival and reproduction traits. If we study character-argumentation closer we will see particular patterns in the character-arguments that revolve around particular subjects supporting the procreative success of this exchange of group members: for example, particular weight on determining that the other group doesn’t sit on the lazy side all the time and get food while doing nothing, and that the group also won’t be the bad kind of rich that keeps everything and leaves nothing for the rest. These seem very common topics in character-arguments: the lazy sap on society, and the rich that don’t leave anything for anyone else.

A ball with an eye, a muscle and some neurons can react the same way in the same scenario (not counting quantum physics) an infinite number of times. But a ball with an eye, a muscle, some neurons and memory, even the smallest amount of memory, will not always react the same way in the same scenario, because it will remember some aspect of the previous time it encountered that scenario, and this will impact its choice. Infinite time to make choices means an infinite number of changes to what the brain chooses, simply because it remembers the last time and might even choose another option purely out of boredom. The same goes for determinations, conclusions, arguments, opinions, ideas, etc. This means all choices and determinations made by brains are temporary. If there is an infinite number of possible choices, and the right choice is only 1 or any other non-infinite number, then mathematically our choice is very unlikely to be correct. Infinitesimally likely, in fact (infinitesimal means infinitely small). If the chance of a choice being right is infinitesimal, then the only conclusion we can draw is that choices/opinions/conclusions/etc. made by brains are no more likely to be correct than random chance. Given this conclusion, it should be impossible for our brains to lead to space stations and such things. But evolution has managed to make complex, very unlikely things occur over time, simply because a certain thing has survived and procreated.
I suggest that the reason we have technology, even though our brains are as unreliable as we have figured out, is that our decisions go through evolution. Some survive and lead to new copies that sometimes aren’t perfect copies, more like slightly changed “new and improved” versions. And some disappear because new and improved was really new and worse for survival. Some of the ones that survive and spread also disappear, but some of them are copied and don’t disappear. Ideas/decisions like “killing all newborn children” are not likely to last very long, but the opposite, “get married and have lots of children”, is more likely to spread and not disappear. This might explain why so many civilizations lay such weight on having children and marriage and so forth. This particular bit, the evolution of ideas and ways of doing things, is something someone else has thought of before me: memes. A meme (/ˈmiːm/ meem) is “an idea, behavior, or style that spreads from person to person within a culture”.
This means the human brain’s ability to do mathematics, language, physics etc. is not the reason humans seem like the most intelligent species on this planet. The only reason we seem like the most intelligent species is that we have the best ability to perform the evolution of memes, or we have simply had the ability to perform this evolution, not very effectively, for a very long time. For all intents and purposes, human houses, schools and industry are no different from bird nests, young monkeys clinging to the backs of their mothers as their mothers go about doing things, and apes looking for fruit to feed their young. It only differs in the direction the evolution of memes has taken. And as with biological evolution, no “species” of meme is less or more evolved than others unless they are from different time periods (the leaf on the tree of life the dinosaurs were at has gone through less time to evolve than the branch humans are on, but parts of the branch the dinosaurs came from have evolved to this day in the form of birds).
A meme does not have to benefit survival to stay around; it is enough that the ones that have the meme survive, perhaps for other reasons. It just can’t hurt survival too badly. Though it is entirely possible that a meme can hurt survival so much that the society that holds it is completely wiped out and no longer exists (Easter Island, or our own civilization if we are taken out by an extinction-level event through an asteroid hit or climate change or thermonuclear war, etc.).

Another important point is that we should not take our own brains so seriously. Opinions we have are not so likely to be correct that we should hang on to them for a long time. A huge portion of character-arguments revolve around the argument that “I am older than you and/or more experienced in this subject, therefore you can not possibly have a better opinion about this than I already have”. It keeps people from accepting new, potentially better ideas in all walks of life, completely without giving the idea a proper investigation. And if our lives are to be extended through cures for cancer and whatnot, then we will have to be very aware of this tendency so that we don’t stagnate as a civilization because everyone is old and too stubborn to let go of old opinions. Annually adopting the attitude that “I don’t know anything at all” is healthy; it allows investigating opinions and views, and I do it religiously myself. Or else I might, for example, always feel negative towards idea X or product Y, or always feel positive towards idea Y or product X (e.g. always being for or against a certain brand regardless of what the brand actually does over time).

But to summarize: free will, as far as it can be reached within the known laws of physics, is introspectrum, also known as introspectral level infinity. This means that person A makes a decision/choice/meme while having his/her brain scanned by a very accurate machine, which records the event. Then the person interprets the data from the recording, while still being recorded again. Then the person interprets the data from the second recording while still being recorded a third time, and so forth, forever. Given that you remember some of the previous interpretations, you may change your mind at every level, and even if you only change your interpretation extremely rarely, you still change your mind an infinite number of times because you never stop the process. Whether this is practical is another matter; the closest thing we can have right now is perhaps occasional spectre level 2, and in 50 years maybe level 2 consistently. And given the proper funding for Engineered Negligible Senescence, by that time we may have rejuvenation biotechnology that keeps our bodies so young that we never get old and sick (cancer, dementia, cardiovascular disease, bad skin, etc.).

Imagine the brain as 1 logic gate.

The AND gate only sends a signal out on (c) when both (a) AND (b) get a signal. When only one or neither gets a signal, it does not send a signal. Input (a) is hunger, input (b) is sight. Someone asks you “do you want cake?”, and if you are hungry and see a cake, you say “yes, I want cake”, which is the output on (c). If you aren’t hungry but see a cake, you don’t get an output on (c). And if you are hungry but don’t see a cake (or see a rubbish cake), you say no (you don’t get an output on (c)).
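The cake decision above can be sketched in a few lines of code. This is only an illustration of the single-gate picture, not a claim about real neurons; the function names are mine.

```python
# A minimal sketch of the cake decision as a single AND gate.
# Inputs "hungry" and "sees_cake" play the roles of (a) and (b) from the
# text; the output (c) is whether we say "yes, I want cake".

def and_gate(a: bool, b: bool) -> bool:
    """Output (c) fires only when both (a) AND (b) fire."""
    return a and b

def want_cake(hungry: bool, sees_cake: bool) -> str:
    return "yes, I want cake" if and_gate(hungry, sees_cake) else "no"

print(want_cake(True, True))    # hungry and sees a cake -> "yes, I want cake"
print(want_cake(False, True))   # sees a cake but not hungry -> "no"
print(want_cake(True, False))   # hungry but no cake in sight -> "no"
```

Given the same inputs, the gate always produces the same output, which is the deterministic point being made here.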
This is in essence how our brain works, only the logic system is more complicated. Just to function the logic gates (neurons) need nutrients and the right temperature and the right acidity and a bunch of other things.
How can we then say that one or another thought or opinion is right or wrong, of good or poor quality? We can’t! The only thing we can do is say “this is the reason my brain wants cake or doesn’t want cake”, or “this is the reason my brain is liberal or conservative”. Then, at spectre level 2, we can record the brain considering being liberal or conservative, and then know exactly how our brain decides to be liberal or conservative.
So, are inputs (a) and (b) sufficient reason for output (c)? Can we change the input so we can get the brain to be either, under different circumstances? Or can the brain only be liberal or conservative, never the other, under any circumstances? Is it “right” to be conservative under one circumstance and liberal under another? Is it “right” to be one or the other regardless of circumstances?
To answer these sorts of questions we should complicate the gate flowchart:

The NAND gate sends a signal whenever (a), (b) or neither gets a signal. If both (a) AND (b) get a signal, then the NAND gate does not send out a signal at all. You already know how the AND gates work.
Is it right to have opinion (e) given the different inputs (a), (b), (c), (d)?
1. Are the inputs the right inputs that we should have to decide this particular issue?
2. Are the inputs being processed correctly?
The first is fairly easy to explain. We need a certain amount of inputs to decide a certain thing. We can’t for example decide whether we are cold without a temperature sense acting as input, so if we don’t have that particular sense or the sense is giving wrong or inaccurate signals, then we can’t decide if we are cold or warm. The problem is that without a machine to record the sense itself, we can’t know if the sense is broken or if it even exists. Our introspection about whether we are cold or not isn’t very reliable, because once you get chilled down you stop feeling cold, perhaps so that you can focus on finding warmth instead of being flooded by the sense of intense cold. The point is, without a machine we can’t know or trust our inputs.
The second is a bit harder to explain. How can we know whether our input is processed wrongly or correctly? How can we know if the processing has at least the level of complexity logically needed to make a good assessment? By logically needed complexity, I mean that the processing unit that, for example, decides whether an input of temperature is cold or not has to be of a certain complexity. The processor unit has to have some sort of comparison function that allows it to compare the input signal temperature with some other reference input. When we jump into ice-water our processing unit reacts by comparing the temperature input to the one we had a second ago, and it concludes that it suddenly became A LOT colder very fast, even if the temperature is merely a few degrees lower than the skin temperature. But our processor is not of sufficient complexity to be able to do much else than this. It reacts the same way whether it is 10 degrees Celsius water or -180 degrees Celsius liquid nitrogen. Without a machine that records our brain-activity while it processes input we can’t know for sure how complex our processor is and what it is capable of doing. For all we know our brain processes the input from our taste-buds and concludes that we are cold because we taste salt. Or for all we know our brain is liberal because it heard a noise just as the question was asked, or for all we know we think 2+2=4 because we remember that we got approval when we said so, not because we do the math now. Without a direct, detailed machine recording of the brain’s processing, it’s impossible to know how the brain processes things.
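The more complicated flowchart can also be sketched in code. Since the original figure is missing, the particular wiring below (one NAND gate and two AND gates feeding opinion (e)) is my own assumption, chosen only to show how four inputs can deterministically force one output:

```python
from itertools import product

# A sketch of a slightly more complex "brain" with four inputs (a, b, c, d)
# and one opinion output (e). The original flowchart figure is missing, so
# this exact wiring is an assumption, not the author's diagram.

def nand_gate(a: bool, b: bool) -> bool:
    # Fires unless BOTH inputs fire.
    return not (a and b)

def and_gate(a: bool, b: bool) -> bool:
    return a and b

def opinion_e(a: bool, b: bool, c: bool, d: bool) -> bool:
    # First layer: NAND(a, b) and AND(c, d); second layer ANDs them together.
    return and_gate(nand_gate(a, b), and_gate(c, d))

# Walking the full truth table shows how "the same brain" is forced to a
# different opinion (e) purely by which inputs happen to fire:
for a, b, c, d in product([False, True], repeat=4):
    print(a, b, c, d, "->", opinion_e(a, b, c, d))
```

Whether (e) is the “right” opinion then splits exactly into the two questions above: are (a)–(d) the right inputs, and is this wiring the right processing?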
The finished output our brain makes based on processing all the inputs is what we call our perception. So our perception is the result of two largely unknown processes with no quality assurance.

When we generally talk about why we believe something, or why we decided to do something, we generally don’t use this type of argument. We might say “I want cake because I’m hungry”, but the real reason we want cake, when we want cake, is that our brain HAS to want cake when we’re hungry and when a bunch of other inputs are just “right” for it.

Imagine this one again; let’s call this experiment S1. (a) is hunger, (b) is whatever else we need to want cake. In such a case it’s impossible for the output (c) to be anything other than “I want cake” when both (a) and (b) have input signals. As a spectre 1 brain it’s impossible to have any degree of free will; the brain is completely a slave to the conditions and follows the deterministic, somewhat dystopian rules.
In experiment S2, the person can see that “ahh, I want cake because my brain works like this”. But then we have another system that then decides what we are to do with this information:

Here (a) is seeing how we decide we want cake in a recording of S1, which we look at and analyse, and (b) is another input. If we get input at (b) while we have input at (a), the output (c) in S2 can be “I should want cake under these circumstances”, while no signal out of (c) can mean “I should not want cake under these circumstances”. But what we decide here is also deterministic, so we HAVE to decide that “I should want cake under the circumstances of S1” when the conditions are so.
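The relationship between S1 and S2 can be sketched as two stacked gates, where the recording of the first gate becomes an input to the second. Again, since the figures are missing, treating each level as a single AND gate is my simplifying assumption:

```python
# A sketch of experiments S1 and S2, assuming each level is one AND gate.
# The wiring is illustrative only; the point is that each level is just as
# deterministic as the one below it.

def s1(hungry: bool, other: bool) -> bool:
    """Spectre 1: the brain HAS to output "I want cake" when both fire."""
    return hungry and other

def s2(recording_of_s1: bool, b: bool) -> bool:
    """Spectre 2: input (a) is the recording of how S1 decided; (b) is
    another input. The verdict "I should want cake under these
    circumstances" is forced just as the S1 output was."""
    return recording_of_s1 and b

wanted_cake = s1(True, True)      # S1 output: "I want cake"
verdict = s2(wanted_cake, True)   # S2 output: "I should want cake under these circumstances"
print(wanted_cake, verdict)
```

Recording S2 and feeding that recording into a third gate would be spectre level 3, and so on up the levels.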
We can then record S2 and then go to spectre level 3 and then record that and go to spectre level 4, etc. At each spectre level we are still deterministic entities, but the more we do this the more we raise ourselves from spectre zero (stones and all sorts of stuff that lack sensory input and a processing unit).

I can’t imagine any spectre 2 discussions ever taking the same shape as what we normally think of as an argument. In a spectre 1 discussion one exchanges largely superficial, pointless arguments, much like character-argumentation, or semi-logical specious arguments that appear logical to our flawed processing unit (specious, adjective, means “apparently good or right though lacking real merit; superficially pleasing or plausible: specious arguments”, or “pleasing to the eye but deceptive”).
All too many things we consider logical in our day-to-day spectre 1 lives have been shown to be quite wrong. An excellent example is the Monty Hall problem. You are shown three doors; behind two there’s a goat, and behind one there’s a car. Monty Hall says you must choose one. After you have chosen, he opens one of the other doors that he knows doesn’t have the car behind it, then asks whether you want to stick with your original choice or switch to the other remaining door. Our innate processing unit thinks it’s logical to stick with the same door, since it processes two remaining doors as a 50/50 chance, in which case there would be no point in switching. But this is clearly wrong. You double your chances if you switch doors, so our layman logic is about as logical as the clearly illogical Vulcans (Vulcans appear to be largely unaware that they’re the only logical ones, so they are baffled when someone non-Vulcan acts illogically; that’s illogical behavior by the Vulcans). In spectre 1 discussions we are pretty much determined to do everything that we do, regardless of whether we want to do it or not in hindsight.
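The Monty Hall claim is easy to check by simulation rather than intuition. A quick sketch:

```python
import random

# Simulate the Monty Hall game: switching wins about 2/3 of the time,
# sticking only about 1/3, contrary to the "50/50" intuition.

def play(switch: bool, rng: random.Random) -> bool:
    car = rng.randrange(3)       # door hiding the car
    choice = rng.randrange(3)    # contestant's first pick
    # Monty opens a door that is neither the pick nor the car.
    opened = next(d for d in range(3) if d != choice and d != car)
    if switch:
        # Switch to the one remaining closed door.
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == car

rng = random.Random(0)
trials = 100_000
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
stick_wins = sum(play(False, rng) for _ in range(trials)) / trials
print(f"switch: {switch_wins:.3f}, stick: {stick_wins:.3f}")  # roughly 0.667 vs 0.333
```

The simulation makes no appeal to how the problem "feels"; it simply counts outcomes, which is exactly the kind of check our spectre 1 processing unit skips.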
In a spectre 2 discussion, however, it’s completely known how both sides arrived at their arguments/opinions/decisions/ideas/etc., so they don’t have to argue “you should believe what I believe because of this reason that I think is the reason I believe what I think I believe” (I say what you think you believe, since your perception of what you believe can be wrong, too).
They know exactly the reasoning behind both sides and don’t have to argue about the same things that spectre 1 discussions argue about. The spectre 2 discussion is aware that neither side had a choice about their previous actions, and as such that they should not be held accountable for their previous idea or opinion. They also know they are not themselves limited to their own opinion from S1 when they look at it in S2. Now, in spectre 2, their opinion may be different, since they know how they themselves arrived at an opinion.
They can also figure out that the reason they thought was the reason for their belief is actually wrong. That they in fact believe something for another reason than they initially thought.
This will most likely be very common. We may think we choose 2 slices of bread instead of 3 because of a host of rationalizations, but it may even be random, or simply memory, or a mix of the two. We may even think we’re liberal, and then the machine that records our brain shows that we are actually conservative. Or we can realize we are actually liberal because we heard a president say the word in our childhood. With total insight into how our brain processes the input, and what the input is, we can at the very least avoid the thinking and doing that so many of us get wrong. Like the Monty Hall problem.
Our biggest problem with the idea that we don’t have free will is that, at spectre 1, we are determined to do everything regardless of whether we want to do it or not. Or so we illogically assume. We often illogically assume that in a deterministic universe we are like prisoners in our own bodies, and then have to, for example, sit and play the piano when we hate playing the piano. We also illogically assume that our future is certain no matter what we do now. That whether we will climb Mount Everest is already decided. That all our future mistakes are already unavoidable.
But this is wrong. For one, we always want to do what we do. Even if we don’t really feel like taking out the trash, we do it because we want to do it at the time when we actually do it. If we really did not want to do it, it would not have been done at all. If it takes a spouse nagging a lot before we take out the trash, then we didn’t want to take out the trash until the spouse nagged sufficiently. The nagging is in effect another input that changes the processing result from “I don’t want to take out the trash” to “I want to take out the trash”. Though it may just be “I want to stop the nagging, therefore I want to take out the trash”.
Secondly, the future is not yet decided. Or, it could be, but how can you tell if your future is to climb Mount Everest or not? When I have the choice of whether my future is A or B, I just assume it’s the better future I was meant for, and then I proceed as though that’s what I am determined to do. If you walk into a store and there are two products of different quality but the same price, you don’t take the lower-quality product. So why would anyone take the nihilistic view of “I don’t bother doing anything, because I’ll get a raise, or I won’t, no matter what”? Would not working hard to improve one’s chances be the better option? I think one input that very commonly decides what we do and what we think is laziness. Laziness is the energy-saving function; it is there from evolution to make us save calories and thus put on fat for times of little food. As a spectre 2, we can know that the reason we don’t bother to take out the trash is not any reasoned argument, but that our processor wants to save energy whenever something does not lead to reproduction. So as an occasional spectre 2 I have asked myself whether laziness, for the purpose of saving fat for the future, is a good reason not to do something, and I have repeatedly come up with the answer “No! Calories spent mean I can eat another cake today!”. It’s like if we imagine ourselves as cars: if fuel is plentiful, and we get less worn out by racing around than by standing still, does it then make sense to ever stand still? Does it ever make sense to not take a drive? Does it ever make sense to say no to doing something in favor of doing nothing?
I have also determined that the more calories I use, the more I get out of life, since I do more work, more studying, more exercising, cook more good food, can watch more movies and do more thinking about stuff like, for example, free will. Even just taking out the trash gives me more benefit than simply sitting on my butt. Simply because calories are not worth anything if they’re not spent (they are actually of negative value if they’re not spent; fat is dangerous for one’s health). When we know the reason we don’t want to do something is likely the innate wish to save calories, and calories are abundant, then we can hardly avoid concluding that “I can’t be bothered” or “I want to save calories” is the worst reason we can have. It’s the reason I find it rather easy to exercise. Deciding how to get politicians to fund rejuvenation biotechnology, or what education I should get, that is difficult. But spending some calories lifting weights and punching a punching bag, that is easy riddance of calories, which means I can have a steak for dinner.

Spectre level 2’s can see how they, under certain circumstances, have no choice but to think Z, and that under certain other circumstances they have no choice but to think something else. They can focus on the more pressing matters of analysing how they had no choice, so they can improve the processor and inputs. They can for example decide to change the brain’s processing unit in such a way that it no longer wants candy or tobacco, or to rob a bank under this and that circumstance. Because someone who robs a bank is forced to by the laws of physics inside his brain and the circumstances around him, today and in the past. We can’t let robbers and murderers roam the earth as long as they’re dangerous, so don’t jump to the illogical conclusion that letting them roam free is what we would do once someone is no longer morally responsible.
Spectre 2’s would also not be so attached to what they did and thought and believed in the past seconds, now that they are looking at the recording of their brain. If they know the reason they believed something that made them be an arse towards someone else, then they can change that reason if they don’t want to behave like that. If they find out why they are depressed, they can change it so that they are no longer forced to be depressed under those circumstances. If they find out that they are destined not to bother to walk across the North Pole on skis, then they can change their processor so that they are no longer so lazy.
Spectre 2’s can in essence decide to be destined for good things and greatness, instead of being destined for whatever bad things we are destined for with our flawed brains.
Spectre 3’s are important in that they will analyse how we decide what to improve in spectre 2. In spectre 2 we might decide that we want to change an input gate from a NAND gate to an AND gate, so that we are no longer depressed in a certain circumstance. But when we record the processing that leads to this conclusion, and analyse that processing, we can see that we are destined to want to change that NAND gate into an AND gate under these exact circumstances. And if we record that conclusion, we can move into spectre 4 and conclude that we were always destined to conclude what we concluded in spectre 3. Or we can conclude something else, and move into a higher spectre level again. So most likely we will be at something like spectre 1000 before we change something we discover in spectre 2. We must make sure that we have investigated how we arrived at wanting the change we think up in spectre 2. Or we might change something, and much later discover that we were in fact destined to change that thing, making something worse.
Imagine if we could change our processor so that no matter what it processes we never want to kill.

Even if we enjoy the luxury of going up the spectre levels for trillions of years, we will still be deterministic in what we do and in how we choose what we want to do. But what we do and what we want to do will have gone through a whole lot of scrutiny and energy before we actually do it or decide that we want it. As stated earlier, energy is something we have in abundance, so spending lots and lots of energy thinking about something isn’t stupid. Being lazy and not even bothering to think because it requires too many calories, THAT is stupid. I may be biased, though, as I have concluded that the amount of energy we spend on something is directly proportional to the amount of intelligence we have put into it. If we use ten calories deciding whether we are liberal or conservative, it’s not exactly something we have weighed heavily. If we spend all the energy in the universe deciding what the meaning of life is, then whatever we conclude must be the closest to the truth that we can get within our universe. Maybe we could get closer to the truth if we travelled to another universe, or maybe someone in a bigger universe with more energy in it will get closer to the truth than we ever can. In any case, the amount of energy we spend on something seems a very accurate way to measure skill, or how close something is to the truth. Two examples:
1. Every extra second you spend writing out Pi brings you closer to the true number Pi, but you can never come infinitely close to it, because it’s infinitely long.
2. Athletes seem to be good at their sport in very close proportion to how much time they have spent practicing. Chess players that are good are the ones that have played a lot (Malcolm Gladwell made famous the idea of 10 000 hours as a rule of thumb for when we get world-class at something). The same goes for most other skills: mathematics, modeling, painting, composing music, even thinking about free will (I have done extensive thinking on it, more than some scientists that have done experiments about free will, actually, since their time has been taken up mostly by thinking about how to perform the experiments themselves). If I did something I have not done a lot, I would be stupid at it, while I would be somewhat intelligent at something I have spent a lot of energy doing. I think this is the only definition of intelligence that makes sense. It also happens to explain stupidity, which other definitions haven’t, yet. Stupidity, by this definition, is to spend little energy on a decision or idea or opinion or action, etc. More intelligent would be to spend more energy; the absolute intelligence possible would be to spend all the energy in the universe on it. Practicality means we can’t consume the energy of the entire universe every time we want to decide which presidential candidate to vote for, but practicality also means we can spend a little more energy than the Americans seem to be doing. The Americans could spend some extra calories sitting on their butts thinking about that; maybe they would lose some weight.
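Point 1 can be illustrated numerically: each extra digit of Pi you write down shrinks the error, but the error never reaches zero. A small sketch (using Python's `math.pi`, which is itself only a 64-bit approximation, precise enough for ten digits):

```python
import math

# Each extra digit of Pi written down shrinks the truncation error,
# but it never reaches zero, because the digits never end.

for digits in range(1, 11):
    # Truncate pi to the given number of decimal digits.
    approx = math.floor(math.pi * 10**digits) / 10**digits
    print(f"{digits:2d} digits: {approx:<13} error < 10^-{digits}")
```

Every added digit buys roughly a tenfold reduction in error, so the effort spent is directly reflected in the accuracy reached, which is the proportionality being argued for.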

But don’t misunderstand me when I say I have spent an above-average amount of time thinking about free will. I can’t claim I am brilliant because I author an opinion or action or thought I consider good, without free will. Without free will, it is reasonable to conclude that someone else in the same situation, with the same life experiences and the same inherited traits, would have the same thought, action and opinion at that time.
I therefore really don’t like comments like this, which essentially are character-arguments:
“You have not done enough to have an opinion about this that I respect (paraphrased)”.
Any response I make to this would inevitably be something along the lines of bragging about my actions and thoughts and opinions. Any convincing argument I make will simply be something I think of as bragging in a free-will universe. I don’t believe free will exists (though I still feel like I have free will), but others do believe free will exists. And they then believe I am saying “Oh look at me, I’m great, because I authored all this behavior that is good behavior”. They, by the way, don’t fully realize that their comment practically means I have to respond this way logically.
In reality, to me, I’m saying “Whether this should increase my social status or not is beside the point; I am saying this because you were determined to make an inaccurate comment about my accomplishments in regards to this topic, because you were determined to be unable to attack the argument itself under these conditions”.

I quote some unnamed source:
A parable for people who do not understand the questions and so they feel they have all of the answers.

A self-appointed expert on Buddhism went to the mountains of Japan to meet with a famous Buddhist master, since he had so many questions. Once introduced, the Buddhist master asked the “expert” if he wanted tea. As the tea was being served, the visiting “expert” waxed profoundly on Buddhism while the master kept pouring tea until it overflowed the cup. The “expert” exclaimed, “Master, there is too much tea in the cup, it cannot take any more.” The master replied, “See, just like the cup, you must be empty to accept my tea.”

My version of it is:

A parable

A self-appointed expert on Buddhism went to the mountains of Japan to meet with a famous Buddhist master, since he had so many questions. Once introduced, the Buddhist master asked the “expert” if he wanted tea. As the tea was being served, the visiting “expert” said, “But I have not been given a cup.” The master replied, “You need to find or create a cup that can hold the tea before you can accept my tea.”

The response was:
Excellent parable, but from what I have seen of you so far, you have not demonstrated that you practice it.
End quote.

My response was:
I am not even sure human brains are able to perform such a task, let alone define how one does it. It’s a parable that, to me, refers to the difficulties of Epistemology.
End quote.

What do I mean by this? It is more easily seen if we change the characters:

A 10-year-old self-appointed expert on quantum physics went to the mountains of California to meet with a famous quantum physics professor, since he had so many questions. Once introduced, the professor asked the “expert” if he wanted tea. As the tea was being served, the visiting “expert” said, “But I have not been given a cup.” The professor replied, “You need to find or create a cup that can hold the tea before you can accept my tea.”

Now it is easier, but still quite dependent on chance whether you understand what I’m on about.
I am not sure that it is possible to know things (Epistemology deals with that), but more importantly I am not sure we can convey such knowledge from brain to brain, and I am not sure a brain could even contain knowledge, even if we were able to produce a machine that produces “Epistemology-proof” knowledge.
Knowledge is sometimes defined as a belief that is true, which can be shown as an Euler diagram like this:

We can assume that the universe is true in whatever configuration it is actually in at any moment in time. The brain then has to be able to contain some of that truth. The problem is: how can a brain contain truth? Take the entire Earth as the truth we must contain. The “Earth truth” is then a huge number of atoms in a particular configuration. Our brain can’t make a perfect copy of it; it can only make a tiny, inaccurate model that represents the Earth truth. Like a map of sorts, but instead of ink on paper it is activity in a brain, which then maybe forms a neurological pattern for future activation (memory). How can this scaled-down, inaccurate model be the Earth truth? We may use machines to record the brain’s representation of the Earth truth and determine its degree of detail and accuracy, but however we argue it, it will never be the entire truth in its exact configuration, hence it is not really true. Just almost true.
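The map analogy can be made concrete with a toy sketch. This is purely illustrative, not a claim about how brains work: the “truth” here is just a short list of numbers, and the “brain” stores a rounded, lossy model of it.

```python
# Toy illustration: a "truth" (a list of measurements) versus a
# scaled-down model of it. All names and numbers are made up.

truth = [3.14159, 2.71828, 1.41421, 1.73205]  # the "Earth truth", in miniature

# The "brain" stores a lossy model: each value rounded to one decimal place.
model = [round(x, 1) for x in truth]

# The model is useful, but it is not the truth: there is residual error.
error = [abs(t - m) for t, m in zip(truth, model)]

print(model)       # [3.1, 2.7, 1.4, 1.7]
print(max(error))  # the largest deviation between model and truth
```

The model answers rough questions about the truth correctly, but no amount of arguing makes it identical to the truth, which is the point of the paragraph above.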
But can a brain contain perfect representations of smaller truths? I have yet to consider this thoroughly. Check back later for an update to this.

In the meantime, I have thought about the problem of quantum measurements. This problem may well undermine the entire concept of spectre levels as a useful thing to do.
To explain I will have to bring back the three aspects of determinism from earlier:
1. Position.
2. Time.
3. Rules.
The problem is that the position has to be changed if it is to be measured. Specifically, if you are to measure the position of a particle, you must bounce another particle off it. It’s essentially like using one billiard ball to determine the position of another billiard ball. The reason you can’t simply see the tiny particles like you can see a billiard ball is that the particles don’t release light unless they absorb light or heat first, which means they don’t reveal where they are unless they are bumped out of position. Essentially.
A game I would love to make is one where the billiard balls only light up with a very short flash every time they are hit, and the game is played in a dark environment. This would give an intuitive understanding of this position problem.
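As a rough sketch of how such a game could work (everything here, names and numbers, is my own invention, not an actual implementation):

```python
import math

# Sketch of the "flash on impact" billiard idea: balls are invisible
# except for a brief moment after another ball bumps them.

class Ball:
    def __init__(self, x, y, vx, vy, r=1.0):
        self.x, self.y, self.vx, self.vy, self.r = x, y, vx, vy, r
        self.flash = 0.0  # time remaining on the visible flash

def step(balls, dt=0.1, flash_time=0.2):
    for b in balls:
        b.x += b.vx * dt
        b.y += b.vy * dt
        b.flash = max(0.0, b.flash - dt)
    # A collision is the only event that reveals a ball's position.
    for i, a in enumerate(balls):
        for b in balls[i + 1:]:
            if math.hypot(a.x - b.x, a.y - b.y) <= a.r + b.r:
                a.flash = b.flash = flash_time

balls = [Ball(0, 0, 1, 0), Ball(3, 0, -1, 0)]  # two balls on a collision course
for _ in range(20):
    step(balls)

# Only bumped balls "say where they are":
visible = [b for b in balls if b.flash > 0]
```

The player only ever learns positions from collisions, which mirrors the measurement problem: no bump, no information.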
Light is a particle and literally bumps anything it touches in a certain way, so if we want to see something, it has to be bumped. To see the Moon, we must bump it with light, which then reflects into our measuring device (a camera). The Sun is already doing this, which is why you can see the Moon at night (and sometimes during the day).
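Some rough back-of-the-envelope numbers, using textbook constants, show why this bumping is harmless for the Moon but enormous for a particle:

```python
# A photon carries momentum p = h / wavelength (textbook values, rough numbers).
h = 6.626e-34            # Planck constant, J*s
wavelength = 500e-9      # green light, m

p_photon = h / wavelength  # momentum per photon, ~1.3e-27 kg*m/s

# One photon's kick barely changes the Moon's velocity (~7.3e22 kg):
moon_mass = 7.3e22
dv_moon = p_photon / moon_mass        # utterly negligible

# The same kick delivered to an electron (~9.1e-31 kg) is huge by comparison:
electron_mass = 9.1e-31
dv_electron = p_photon / electron_mass  # on the order of 1e3 m/s
```

So "looking" at the Moon leaves it undisturbed for all practical purposes, while "looking" at a particle shoves it hard, which is exactly the measurement problem described above.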
On a barely related note: based on what happens over time (2) because of the rules (3), we have figured out that there has to be a lot of unseen position information (1), or things don’t make sense in the universe. This is why we think the universe has to contain roughly 20 times more stuff than we can see (see dark energy, the energy needed to make our universe expand quicker and quicker, and dark matter, the matter needed to make our galaxies spin as quickly as they do while keeping the stars from flying away into space).
So what does this do to our spectre levels? When we measure our brain activity, we affect the particles in our brain, and we are guaranteed to affect our decisions to an as-yet-unknown degree (someone would have to calculate the effect of the machine, which we have not yet built, that records the brain in fine enough detail).
The effect the recording machine has on the brain will be proportional to the amount of detail we get. So if we have total detail, we can be pretty much certain that our brain activity was affected by the machine recording it. Hence, if we decide between option A and option B, the resulting choice can be the result of the machine. When we analyze the recorded information in spectre 2, we will see many examples of how the machine has affected our brain processes: where a neuron would be just under the firing threshold without the machine, the machine can push it over that threshold, and where a neuron would be over the threshold without the machine, the machine can push it under.
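A toy model of that threshold effect (the threshold and every number here are made up for illustration, not neuroscience):

```python
# A neuron "fires" when its summed input crosses a threshold. A recording
# machine adds a tiny perturbation, which is enough to flip the outcome
# either way when the neuron sits near the threshold.

THRESHOLD = 1.0

def fires(inputs, machine_effect=0.0):
    return sum(inputs) + machine_effect >= THRESHOLD

just_under = [0.5, 0.48]  # sums to 0.98: silent on its own
just_over  = [0.5, 0.52]  # sums to 1.02: fires on its own

# The machine's tiny nudge changes both decisions:
assert not fires(just_under) and fires(just_under, machine_effect=+0.03)
assert fires(just_over) and not fires(just_over, machine_effect=-0.03)
```

The closer a neuron sits to its threshold, the smaller the machine effect needed to flip it, which is why total-detail recording would be expected to leave fingerprints all over the recorded decision.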
I’m still working out how this works. More to follow.
(still updating)

Is it right to produce offspring with finite lives?

We currently have the theoretical know-how and technology to extend life drastically (perhaps 50% longer lives in genetically above-average cases), and could within a century conceivably eliminate about 95% of the global causes of death. Is it then ethical to produce offspring before this practically infinite lifespan becomes the norm? You’d think the ethical thing for the species is that the species survives, but no dinosaur suffers because the dinosaurs are no longer producing offspring. The individual dinosaurs did, however, suffer when they all died of one cause or another. Thus it is unethical to allow individuals to die, even if that means we produce no new offspring to continue the species, because individuals that don’t exist yet cannot suffer from not existing.

PS: I could have made this as long and complex as a peer-reviewed paper, but it frankly does not need that many words.
PPS: This argument does not take into account anything other than the suffering of dying. It can be argued that it is ethical to produce offspring if that offspring experiences more good than bad, but defining what good and bad are would require many words and is not relevant here, to the argument I am trying to make.