From the Olympics to biohacking, we have always been driven to push past our limits. With AI and the metaverse promising to change our engagement with reality itself, what does it really mean to be human?
A famous thought experiment in 20th-century philosophy is known as the ‘experience machine.’ Suppose you could plug yourself into a device and experience all the greatest pleasures you could ever imagine. For example, you could live the life of a Nobel prize-winner, a great artist who enjoys popular and critical admiration, a saintly rabbi, or whatever tickles your fancy. You could experience the highest of life’s highs and none of the lows — all of the gain and none of the pain.
Or at least, you’d have the experience of that life. None of it would actually be happening, but, crucially, you wouldn’t be able to tell the difference.
If this all sounds familiar, you either managed to sit through at least some of your philosophy 101 classes and listened to American philosopher Robert Nozick’s ideas, or you’ve seen an obscure Hollywood film called The Matrix.
The experience machine was a way to argue that there’s more to life than pleasure. More precisely, it sets its sights on a moral principle known as ethical hedonism. Hedonism, in this sense, is more substantial than supersizing your drink and fries. It’s the idea that pleasure (and the avoidance of pain) is fundamentally at the heart of what’s good or bad.
That may sound too shallow to be a genuine manifesto for living. But if you stop and think about it, the principle has some intuitive force. After all, when we’re faced with a difficult decision and want to do the right thing, we often try to work out what will make the most people happiest and cause the least pain. Indeed, that’s how a lot of acceptable public policy works too.
So why does the ‘experience machine’ rub many people the wrong way? It’s certainly not that pleasure is something to be avoided. And we can’t really say those pleasures aren’t real. They’re real enough to us, and if all we’re talking about is the subjective experience of pleasure, what difference does it make whether they’re really real?
So maybe the problem with subjective pleasure isn’t the pleasure so much as the subjectivity. In the ‘experience machine,’ we don’t do anything so much as experience many things. That is, we don’t owe anyone anything, no one owes us anything, and nothing we do (or seem to do) has any effect on anyone else. We are, in some fundamental sense, alone. And that seems important.
After all, any sufficiently complex animal can experience pleasure or pain. Only humans can build complex ethical networks of trust, reciprocity, love, respect, jealousy, and betrayal.
It’s hard to sustain the case that ‘fake’ pleasures aren’t actually pleasurable. But simulated ethical responsibility? That’s not being human; it’s watching a movie about the human condition.
Algorithm blues
In one sense, the experience machine transcends the limits of our finite human form. We can do anything, be anything, and not be limited by our physical circumstances. But in a more profound sense, the experience machine seriously limits our human powers. Really, it eliminates them. We don’t actually do anything, and we are not, in any true sense, human.
But what happens when technology helps us exceed our physical limits to do more in the world – and to each other?
The notion that technology extends the scope of our ethical responsibility is hardly new. Military leaders can launch missiles that cause devastation on the other side of the globe. Cyberbullying can have catastrophic consequences, and almost anyone can do it from the comfort of their own home.
But now, we seem to be entering an even braver new world, where we don’t just arm ourselves with slick new tools and gruesome gadgets. Many of the masterminds of Silicon Valley or Tel Aviv aim for nothing less than transforming our very selves to overcome our human limits.
There’s biohacking, where we alter our physical bodies to unleash new capabilities. These initiatives range from ingesting substances that promise to enhance cognitive function to implanting microchips under our skin and even editing our genes.
Then there’s the promise of potentially living our lives in virtual reality, perhaps multiple virtual realities.
In a society that enjoys a good coffee, most of us can hardly object to substances that improve cognitive ability. But everyone should know that these new technologies present a thicket of moral problems. Should parents be able to pick and choose the genetic makeup of their children? Do virtual societies deserve the rights and protections of physical-world communities?
These will be urgent questions sooner than we might realize. But let’s not miss the ethical forest for the technological trees. I want to step back and suggest a framework for thinking about what we’re up to when we alter our bodies and minds: When does our technology make us less human rather than more human?
AI and advances in micro-processing power don’t just promise to help us reach our goals more efficiently. As we automate more of our decision-making and outsource our hard ethical choices to the algorithm, we risk becoming spectators in a life we’re no longer really in control of.
Put another way, one reason to keep the question ‘what do we owe each other’ in mind is, well, to try to answer it. But there’s another more subtle reason: once that question stops making sense, then perhaps we have lost what it means to be human. At that point, we become more Agent Smith than moral agents, prisoners of our techno-utopia.
We’re very fixated on issues such as whether it’s permissible to resplice our progeny’s genetic code to make them faster, stronger, smarter, and give them better comic timing. But while those are serious problems, we can’t lose sight of the fact that these attributes are just a means to achieve the ends we set for ourselves. A person with enhanced biomechanical limbs is no less a person, even if they are boundlessly more physically powerful.
The truly fundamental question, the one that goes to the essence of being human, is what ends we actually set for ourselves, and whether we should continue to set those ends at all. What if the truth is that with great responsibility comes great power? After all, what strength of will does an elephant have? Or a space rocket? No matter how powerful, these are hardly entities bearing a person’s dignity.
And once we hand over our responsibility to the algorithm…well, who can say for sure?
This is not meant as techno-pessimism. We can’t predict precisely what the future holds, least of all what incredible opportunities technology will bring to the physical world as we explore virtual experiences. But we need to protect our fragile human capacity to define our own goals and realize our values, online and offline — in the metaverse and in our most mundane domestic situations.
The post Is There a Moral Code in a Virtual World? appeared first on aish.com.
Aish.com is an online Jewish newspaper. Aish is a news partner of Wyoming News.