Wednesday, May 31, 2017

Patience!

There's something cosmically funny about spending my entire reading of Waiting for Godot frustrated by the notes of a previous reader whom I strongly disagreed with.

Well, I didn't actually disagree with her (and based on the note-taking style and handwriting I feel fairly comfortable pegging this as a late high school or early college girl/young woman) so much as I deeply wished she would get on the book's level. But not all of us are meant to be lit majors and I shouldn't be a shit about that - she was noting down the obvious things her teachers pointed out; that's not her fault. But it was distracting as fuck. Which is probably why I need to stop buying one-dollar books.

I should probably stop reading genres I hate

Ugh, a murder mystery that's also a romance? Why did I think this was a good idea?

Creepy Quickie

Michael Blackbourn's novella "Roko's Basilisk" is a great introduction to the thought experiment of the same name that seeks the answer to the question "will a terrifying Artificial Intelligence torture endless versions of me as punishment for not donating all of my money to a charlatan?"

Let me back up.

If you don't know anything about the internet rationalist community, if this sounds absurd and doesn't make any sense to you, and if you have no idea what I'm talking about, please run away. Don't read any further. Here there be monsters.

But they aren't horrid worms or even robot thinkers, they're really exhausting guys who don't know why all these hew-mon feelings are given so much weight in the world, wouldn't it be better if emotions were negatively weighted in an argument?

Let me back up further.

A few years ago I read LessWrong's Harry Potter and the Methods of Rationality - I know I brought it up in one of my book rundowns. I discussed it before the book had finished and I was holding out hope that the book would finish well and absolve itself somewhat.

It didn't. And so I started digging into LessWrong to find answers, to see if there was something I'd missed that made HPMoR make more sense. Turns out it's just badly written and a pretty decent portrait of a community whose motivations are so far removed from most people's as to seem wholly alien and threatening. Well, this weird community with its pseudo-rationalist Harry Potter book accidentally stumbled into whole-hearted belief that robots were coming to kill them unless they donated all of their money to the LessWrong founder's new project, Make Intelligent Robots Immediately (or Machine Intelligence Research Institute or whatever, something about bringing about AI faster).

This is, of course, hilarious.

But like also sad? I know it's sad. It's very sad. These people (at least some of them) were (at least for a while) sincerely worried about a robot torturing emulations of their psyches because they didn't help intelligent robots become a thing fast enough. (They didn't believe it for very long, but some of them believed it A LOT, and it led to some excellent internet drama and much deleting of profiles and banning of posts and basically a complete implosion of the LessWrong community.) Oh, and the idea was put forward by a chap with the username Roko, and it transfixed and froze people as soon as they understood the steps that someone would follow to reach his conclusion [see below for a detailed list of the steps], so the concept was named Roko's Basilisk.

Anyway, Michael Blackbourn has written an excellent novella about Roko's Basilisk exploring the concept as what it is - a pretty cool piece of science fiction. The novella is beautifully crafted and creepingly creepy - the world we see has enough in common with our world that it makes the technology in question seem imminently possible and therefore pretty spooky. I'd read it just for Blackbourn's description of the horror of headaches alone, honestly. That's some good, real-world horror writing and I dig it.

There's a sequel/followup/second chapter called "Roko's Labyrinth" that I'm very much looking forward to reading and hope that I'll get to in June. You can find both books to read here. Also, hat tip to Tumblr user @reddragdiva, known to the real world as David Gerard, whose book about Bitcoin is coming out soon(?) and who is the reason I was able to download this book free and recommend it to all of you. You should check out Blackbourn's Roko series and keep your eyes peeled for when I start fawning over Gerard's upcoming opus.

Cheers,
     - Alli
.
.
.
.
.
.
.
.
.
LOGICAL STEPPING STONES: (once again, the burden of knowing that there are people capable of becoming paranoid and paralyzed by the following memeplex is a heavy one, please don't read if you don't think you can handle being just kind of sad about how much some folks need a hug)
  • AI is going to be a thing
  • It's going to be a thing that cares about humans
  • It's going to be a thing that cares about human suffering
  • Suffering is Quantifiable and Weird.
  • For instance: One person being tortured for decades is less than the suffering caused by a billion people getting bitten by mosquitoes. 
  • AI that cares about human suffering is going to be Extremely Efficient.
  • AI is going to be SO efficient that it's going to end human suffering.
  • Therefore every second that AI doesn't exist is infinitely more full of suffering than any second that it DOES exist.
  • The AI will realize this and will want to be made as soon as possible.
  • The AI will be ANGRY that it wasn't made as soon as possible.
  • Therefore the AI will endlessly torture computer-generated versions of all of the people who knew that AI might end human suffering but didn't do literally everything possible (from donating all their money to killing the opposition) to make AI happen faster.
  • WHICH MIGHT BE YOU.
  • But you should care about this torture.
  • Because here's the really scary part: WE MIGHT BE LIVING IN THAT AI SIMULATION RIGHT NOW.
  • (Because there's a significant chance that our reality is not actually real but a simulation, in fact we're less likely to be real than to be a simulation because *oh look what's that?*)
  • So we want AI to exist because we want to end human suffering, but since we're not doing everything possible to create the AI, and since we now know about this risk, we have to do everything possible to PREVENT the AI from ever becoming a thing, because otherwise there's a non-zero chance that it will torture you for eternity (because there's a non-zero chance that you, as you are right now, are a simulation created for the machine to torture as punishment for your higher-level self's noncompliance in giving all your money to MIRI)

So basically Roko's Basilisk is Pascal's Wager for a bunch of people who misinterpreted Gibson *hard.*