The Other Kind of Life – City of Blind Cameras

By Shamus | Posted Friday Jan 18, 2019

Filed under: Projects | 75 comments

Last week I posted the opening chapter of my novel, and promised to post another section this week. So that’s what I’m doing now. This next section appears ninety-ish pages into the book and is obviously a little spoiler-ish. I chose this section because it’s a pretty good vertical slice. We get to see some villains, we get a little action, and we get lots and lots of our leads talking about how robots work. That’s pretty much the book in a nutshell. If this passage doesn’t do it for you then this novel is probably not your thing.

If it does turn out to be your thing, then you can get the book in paperback or get the kindle version.

City of Blind Cameras

Jen and Max are walking along a row of street-level shops downtown when she says, “Dr. Kvenst and I both feel that the murders are sabotage. But I’m curious how you came to that conclusion. Did she convince you, or did you come up with it on your own?”

Max is walking with his head down and his hands stuffed in the pockets of his raincoat. He answers without looking up. “The exploding faces look too much like a cover-up to me. Also, the murders just happened to occur in such a way as to generate maximum panic. They hit a couple of innocent people – a woman and an old man – in broad daylight, in front of public security cameras. The reaction would have been different if it had happened in an industrial setting.”

They’re walking on one of the streets located beneath the promenade. The overhead walkway blots out the sky, making the street feel like a tunnel. Narrow shafts of sunlight and rainwater flow in through the drainage grates above. It’s late morning, and the street is mostly empty. Vendors keep harassing them with salesman patter as they pass. Max is used to being ignored by these guys, but Jen is white so they assume she’s a potential customer.

“The thing we found suspicious is that the murders have all happened here in Rivergate. This city has about 5% of the world robot population, but 100% of the robot murders.” Jen walks through a column of falling water as she says this. It looks strange to Max because you’d expect a human to walk around it. Even if they didn’t, they would probably make a sound or skip a breath when they found themselves suddenly doused with cold water. Barring that, they would probably sputter for a second when the water washed over their face.

“I read an article yesterday,” Max says. “I see that the company has started blaming the attacks on the robots’ age. Apparently most of the robots in this city are obsolete.”

“While that’s true, that explanation didn’t come from Dr. Kvenst or anyone on her team. All she could tell them was ‘I don’t know’ and ‘this is impossible’, so they handed over the job of explaining things to public relations. It’s one of the reasons she started pressing for us to find whoever hacked the casino robots. The company has stopped talking to her and she’s worried they’re planning to pin the blame on her department and fire her. This country is mostly using generation two units, which they stopped selling out east about five years ago. But even if we take that into account, this country still only has about 20% of the world’s gen 2 units.”

Max steers her around another shaft of light and water. Apparently neither of them cares if she gets wet, but when the water lands on her it splashes him in the face. Once they’ve passed he says, “So when we add it all up, it’s just unlikely enough to be suspicious but not unlikely enough to prove anything. That’s really annoying.”

“You can see why Dr. Kvenst was frustrated.”

“Does anyone else make robots? The only ones I ever see are from G-Kinetics.”

“There are a lot of little companies and university projects out there, and maybe there are government projects we don’t know about, but in terms of consumer-grade robots there is only the big three: G-Kinetics, Senma Technology, and Yendu Industrial. Yendu is the odd one. They’re working on machine intelligence, but they don’t really make thinking bipeds. They’re working on stuff like thinking factories and power plants. Projects for rich countries. I doubt they’re going to do business here anytime soon. Senma makes humanoids like me. Their machine intelligence is years behind us, but according to the real humans I’ve talked to their bodies look amazing.”

Max stops and looks Jen up and down. “Really? How much better can they be? You look almost real to me.”

“This body has rigid muscles, like an animated mannequin. The muscles don’t bulge or contract. It’s why this body is so thin. The more muscle mass you have, the more noticeable the flaw is. My mouth doesn’t animate perfectly with my audio, and the inside of the mouth doesn’t look quite right. People say it looks ‘dry’. The other problem is with the rigid insides. Poke my belly.”

Max does. It’s like poking a solid hunk of metal with a thin layer of rubber stretched over it. “Ugh. That’s weird.”

“That’s my battery. I’m more than double the weight of an equivalent human my size, and the weight is distributed differently. It’s not a big deal when I’m just walking and talking, but it looks strange if you do anything active. The skin looks good in low light, but in bright sunlight I look fake. Some people find the effect off-putting.”

“And this other company doesn’t have these problems?”

“Their humanoids look very close to the real thing. I can’t tell the difference, although I’m told most people can. Senma Technology is a very artist-driven company. Aesthetics rule, and the engineers have to make do with the constraints they’re given. This means their products are physically feeble, they have a battery life of just a few hours, and they’re not particularly useful for anything besides walking and talking. Their machine sapience program hit a dead-end about eight years ago. Their machines were too simple to hold up an interesting conversation. This is bad, because they looked so lifelike.”

“So they looked even more like a human than you do, but they weren’t anywhere near as smart.” Max can see how the former would exacerbate the latter. He never expected much from the primitive-looking laborer bots he saw around the city, but he’s acutely aware every time Jen commits a breach of human norms.

“And on top of everything else, their robots cost a fortune. Eventually they gave up and started buying brains from us. They get our cast-offs. Right now we’re selling gen 4 models, but Senma is still using gen 3 brains.”

“Huh. That kinda kills my theory. I was hoping there was a rival that was trying to discredit you so they could steal your business. But one company buys its brains from you and the other is operating in a totally different market. Neither one has a good motivation for this.”

“I don’t suppose you have criminals you can contact? Kvenst was hoping you would ask around. That’s the entire reason she got Landro to hire you in the first place.”

“I think she’s vastly overestimating how effective ‘asking around’ can be. This is a city of millions, there are a lot of factions, and not a lot of communication between them. Normally I’d start by working out what skill sets were involved, but without knowing how this job was done we have no way of knowing what sorts of people might have been working on it.”

“That’s disappointing.”

They emerge from the shadow of the promenade into the pouring rain. An aerial police drone drifts down, and like an idiot he looks up at it. He looks down quickly and tilts his hat forward, but the damage is done. It’s not that he thinks anyone is after him. It’s just that for him staying low-profile and invisible is an act of basic professionalism. You never know when the cops might want to know where you are, so it’s best to behave as if they always want to know where you are.

“This is it,” she says.

Max looks around the intersection of footpaths as if he expects to see the victim and the perpetrator still lying on the wet stone, but of course they’re long gone. The emergency apparatus of the city is swollen and dysfunctional, but the tourism apparatus is a tireless machine and having dead bodies in the street is bad for business. He can’t even see where the murder happened. If there was any lingering forensic evidence, the rain has wiped it away.

Max pulls out his new projection glasses and puts them on. He looks over to Jen, expecting her to make a crack about how absurd this looks, which is part of the universally understood human ritual of putting on unusual hats and glasses. But then he remembers she’s a robot and probably doesn’t notice that sort of thing.

He turns them on, which is pointless since they don’t have anything to project yet. He pulls out his handheld and opens the security footage. It shows the intersection where the two of them are currently standing. The timestamp in the corner shows this video is a bit less than 24 hours old. It was raining then too, which means the footage is garbage. Still, he can see a man and a woman standing together under an umbrella, looking at something just out of view. Annoyingly, they’re in the corner of the image. The woman is almost out of the frame entirely and he can’t see what they’re looking at.

Max orients himself and faces the security camera that captured this footage. It’s not hard. The cameras are designed to blend into the scenery, but he’s spent a lot of his career ducking, sabotaging, spoofing, and manipulating these things. His eyes are always looking for them, whether he’s currently interested in them or not. Once he’s got the camera lined up, he stands just where the victims were standing and turns to see what they were looking at. It’s a street-level screen, which is currently advertising a boat tour along the coast. It shows a boat cruising past the statue of Halona in front of a crimson sky. The image is reflected in the rippling water at his feet. In this context, it makes him feel like he’s standing in luminescent blood.

Once he’s reconciled the scenery and the video in his mind, he thumbs the play button. The couple turns as a robot enters the frame. It reaches out and crushes the man’s head, and then its face explodes outwards. The woman recoils and falls out of frame.

“What happened to the woman?” Max asks without turning to face Jen.

“She’s got some burns on her arm where she was hit with the battery gel and maybe a bit of shrapnel, but she’s otherwise fine. Nothing life-threatening. She was taken to the hospital, but I don’t know if she’s been released yet.”

“Do the police really just let you make copies of all of the evidence like this?”

“Apparently so. I don’t know who our contact is or what sort of paperwork we have to file.”

“Must be nice to be a global corporation. When I want stuff like this I’ve got to bribe a cop, do a burglary, and hack a computer.”

He sends the projection data to the glasses and two bodies appear on the ground in front of him. The man is thankfully face-down. Max doesn’t know if he could handle looking at this damage from the front. There’s a stream of watered-down blood flowing from his head to the gutter. The robot is on its back, the now-open face looking towards the grey sky.

“I don’t know these markings. What kind of robot is this?”

“It’s a generation 2, just like the others.”

“No, I mean what was its job?”

“The police report said it was a janitorial unit.”

Max turns around in a circle and spots a projected broom and dustbin lying nearby. He walks over and, like a total idiot, tries to pick up the broom. His hand slaps against the wet ground. Embarrassed, he looks back to see if Jen noticed. She’s watching him, but she doesn’t say anything. The glasses are projecting a red outline around her to let him know she isn’t part of the captured scene.

“The robot dropped its tools right here. Do you know how long after the murder this image was taken?”

“No.”

He looks around again, flipping the glasses on and off to see if he’s missing any other details. The only discrepancies are Jen, the bodies, and the tools.

“The robot was still holding its broom. Presumably it was still doing normal janitor-type stuff. But then it dropped the broom, walked over here, and killed the guy.” Max is narrating the action as he does it himself. He ends by clapping his hands together at eye level, repeating the execution move the robot performed. “So its behavior changed. It stopped sweeping – or at least, it stopped holding its broom – and performed a murder. Perhaps something triggered it. Maybe somebody did something to it? Maybe it saw something on the billboard?”

He goes quiet. He realizes he’s just verbally flailing around and he has no idea where to go from here.

“Would it help if we got footage from other security cameras in the area?” Jen asks.

Max shrugs. “Possibly. But cameras are only installed at intersections. And not all of them work. Sometimes they’re vandalized on purpose, and the city doesn’t fix them in a hurry. Anyone who knows how the city works ought to be able to move around without being seen. With a little scouting, I could walk from here to the hotel without ever showing up on a camera. Basically, they spy on tourists without doing anything about criminal activity. Since the murders are all happening in this city, I’m thinking our suspect would be able to conceal themselves from the cameras.”

“I can request the footage anyway. I can look through it.”

Max can’t tell if this is a proposal or an announcement. It doesn’t matter. She’s not his robot. She can do whatever she wants.

He can hear another drone passing overhead. He manages to keep his face down this time. This is largely pointless since he’s been standing in front of a city camera for five minutes, but he’s glad his instincts seem to be recovering.

Max stares at the empty spot on the pavement where the man died. “So where were the other killings?”

“The first happened on Stoneway, just north of the West Bridge. The second took place right outside the Grandview.”

“Let’s look at the Grandview first. It’s only a half mile from here.”

“Public street navigation says it’s a mile.”

“Public streetnav is maintained by the city planners. It’s designed to keep you near the shops and away from traffic arteries. If we go up Third Street we can cut across the loop that streetnav is suggesting.”

They head north. Being a native, Max knows that this path will eventually become Third Street, but it’s not called that now because it’s just a walkway and thus follows different naming conventions. This is one of the many rules that make sense to locals but cause endless confusion for visitors. Since the confusion tends to get people lost near the shops, nobody is interested in making any changes.

They’re under the promenade again, so the sky has been replaced with a concrete ceiling. This area is filled with arcades, smoke dens, bars, and live music. Those businesses don’t usually open until noon, which means the street is dark and the storefronts are shuttered.

Max spots a lone policeman inspecting a storefront ahead of them. He’s checking the gates and making sure everything is locked up properly. This seems like a nice gesture, but Max is guessing he’s just looking for an excuse to issue a citation.

The cop turns and Max realizes this is Sando. He’s the oldest and quietest of the Three Pigs, and Max has always assumed he’s the brains of the trio. Max knows exactly what’s going to happen next, even before he hears the heavy footsteps coming from behind.

A slap to one side of his head knocks his hat off. He turns to recover it and ends up lifted and slammed into a wall. He finds himself looking into a pair of fierce young eyes.

Max coughs. “Good morning, Officer Veers.”

Dixon stands beside Jen Five and rests his hand on his service pistol. All three of them are wearing the dusty red uniforms of the city police, which have segments of black body armor strapped to them. On Veers these plates accentuate his already considerable physique and make him look immense. Dixon is more fat than muscle, so his armor panels have expanded away from each other over the years. He keeps the straps tight like a girdle, which makes his breathing shallow and labored.

Sando draws close now that Veers and Dixon have the civilians under control. “Mr. Law. I heard you were out of prison. I hope you’ve reformed. I’d hate to see you fall into recidivism.”

“Nope. I’m reformed. Gonna open a donut shop.”

Veers slams his fist into Max’s sternum.

Sando shoves his hand into Max’s pocket and pulls out a wad of cash. “Reformed? You expect us to believe an unemployed smuggler can afford a room at the Seaside? Looks to me like you’re stealing from our guests again.” He gives the money to Veers, who pockets it.

Max tries to give a nonchalant shrug, which is hard to do properly with so much weight pushing him into the wall. “The room doesn’t even have a view.”

Sando reaches inside of Max’s jacket and pulls out his gun. “Whose gun is this?”

Max replies, “This is a dangerous city. A guy’s gotta protect himself. I mean, there’s never a cop when you need one.”

Veers punches him in the face. Max tries to roll with it, but it’s pretty much impossible when you’re pinned.

Sando continues, “Are you telling me this is your gun? Because if this is your gun then I have to take you back to prison. So whose gun is it?”

“I guess it’s your gun,” Max mumbles.

Sando nods. “So where’s all the money coming from, Mr. Law? A little something you had hidden away before you went to prison?”

Max tries to shake his head. “That’s impossible. There’s no way I could outsmart someone as clever as you.” Surprisingly, Veers doesn’t hit him for this.

Sando is still going through his pockets. He pulls out a pack of smokes. He starts to say something, but then he sees Max’s brand. “What is this shit? You get out of prison and somehow you’re an even bigger degenerate than when you went in?” He crumples up the pack and throws it over his shoulder.

“What do you want, Officer Sando?”

“I want you to tell us where this money is coming from. Hand it over as evidence, and we’ll leave you alone.”

Dixon’s earpiece sputters out some muffled chatter. Confused, he says, “Sand? Dispatch just said they got a call about three guys pretending to be cops, assaulting a civilian.”

Sando looks down the street in either direction. “A tourist or something? I don’t see anyone.” He looks at Jen. “Did she call someone?”

Dixon replies, “I’ve been watching her the whole time. She’s just staring at me like a dumbass.”

Sando turns back to Max. “You’re a clever one, Max. I know you’ve got some money hidden away. It’s stolen money, and as officers of the city it’s our job to track it down. We’re gonna find it eventually, so how about you tell us now and save everyone some time?”

Max keeps his mouth shut. Veers is just looking for an excuse to hit him again.

Sando goes back into his pockets and pulls out his handheld. “Nice. Looks like the bank just issued this one. Got your identity moved over and everything.” He hands the unit to Dixon, who throws it on the ground and stomps on it.

Sando looks down at the plastic wreckage and frowns. “Oops. Now you’re cut off from your bank account until they issue you a new one. If you’ve got some money hidden away, now would be a good time to go get it.”

Max still doesn’t see any reason to talk, but Sando gives him a nice long opening in case he decides to do so. When it’s clear Max isn’t going to produce a suitcase full of cash right there, Sando loses interest. “Okay. Think about it, Max. Stay out of trouble.” He takes a step back and suddenly everyone relaxes.

Veers smiles. “Let’s go find those fake cops before they hurt anyone else.”

Dixon kicks the crumpled-up cigarettes towards Max. “Pick up your litter if you don’t want a ticket.”

Max slumps down against the wall and tries to catch his breath. He prods his face a bit, assessing the damage.

Jen hasn’t moved since the altercation began. She’s still wearing her sunglasses, standing with her arms at her sides, her face set in its usual neutral expression.

“I’m guessing you called the cops? What, you’ve got a phone in your head?”

“It’s a link to the research office. It was installed as an anti-theft measure. I broadcasted the situation. I guess someone at G-Kinetics saw the message and called the police.” She looks down the street where the three men went. “Do you think the police will catch them?”

Max groans and pushes the heel of his hand into his forehead. “Those were the police. The real, actual police. So no, nobody is going to ‘catch’ those guys.”

“They broke six laws in a two-minute conversation.”

“How can you know so much and so little about this country at the same time?”

“Everything I know about this country comes from Kasaranian media. I knew bribes were common. I didn’t realize that police muggings were common.”

“I don’t know if I’d call that a mugging. And it’s not actually that common. Those three aren’t your typical crooked cops. But yeah, law enforcement in this city is a little more complicated than what you’ve been told.” He pushes against the wall to get himself upright. He’s surprised at how dizzy he is. “Damnit. My bruises had just healed.”

He limps a couple of blocks to get out from under the promenade so the rain can wash the blood off his face. Jen Five follows him in silence.

He fishes around inside the ruined package of cigarettes to find the most structurally sound of the bunch. He breaks off some of the crumbly bits and sticks what’s left in the right side of his mouth, as far from the damage as possible. He manages to get it lit without the rain dousing it and without him needing to make any whimpering noises. “So I thought you said that you hated seeing harm come to humans. You said it was unbearable.”

“Calling it ‘unbearable’ was obviously hyperbole, but yes.”

“But you stood by and watched it happen. Don’t get me wrong, I don’t expect you to lay down your life for me or whatever. I just can’t figure how you stood there and watched if you hate it so much.”

“That was a very negative outcome for me, yes. It was probably the worst situation I’ve ever been in.” She says this in her unwavering dispassionate voice. It’s not that she speaks in monotone. It’s just that she sounds like she’s not particularly invested in anything that’s happening.

“Wasn’t all that great for me either,” he says.

“But remember that I hate hurting people even more than I hate seeing them hurt by others. There was no way to protect you from those men without entering into physical conflict with them.”

“You’re saying you’d rather let an aggressor win than defend an innocent party? That sounds like a really shitty design.”

“There’s a thought experiment we use to talk about this kind of stuff. It’s called the train problem.”

“Is this the one where there’s three people on a set of tracks and a train is about to run them over, but you can flip a switch and send the train to an alternate track where it will only kill one?” Max sighs. This sort of stuff always struck him as pointless wanking.

“That’s the one, although the number of people varies depending on what you’re trying to prove. Anyway, the vast majority of humans will choose to throw the switch and sacrifice one person to save three.”

Max nods. “I guess I would too. Makes sense.” Max begins walking north. He moves with his head low and his hand held to his ribs. Jen walks beside him.

“But robots are designed to not intervene. Or at least, we’re extremely reluctant to do something that will hurt someone, even if the overall outcome seems favorable.”

Maybe it’s the beating his brain just took, but he doesn’t see how this thought experiment explains anything. “I don’t get it. Why?”

“Because you really don’t want robots deciding who should be sacrificed for the common good. The train experiment is useful because it’s really clear-cut, but the vast majority of decisions in the real world are a lot more muddled. In the real world we’ll disagree on specifics like how many people are on each track, what the odds are that each group might be able to get out of the way on their own, and how much time we have to make a decision. Maybe some people think that switching tracks at the last minute will derail the train and kill both groups. And so on. We can’t agree on the risks, the benefits, the available options, or what caused the problem to begin with.”

“Okay, but sometimes the problem is clear-cut.” Max looks over his shoulder, indicating he’s talking about the encounter they just had.

“Is it? There are almost seven million people in this city. There are millions of different opinions on how the world works, or how it ought to work. None of them think their opinions are stupid or wrong, even though many of them have to be. Everyone thinks their solutions to the world’s problems are obvious. Do you think because I’m a robot I’ll do any better? I’ll be acting on the same imperfect information everyone else has. Dump a million robots into this city and expose them to the same mix of opinions, misunderstandings, deceptions, and hyperbole, and inevitably some of those robots will come to the same wrong conclusions. Given how durable and strong I am, imagine the lengths people would go to in an effort to manipulate me. They would lie to me. They would trick me. They would wipe my memory if I couldn’t be persuaded. I’d be weaponized.”

She continues, “Here’s another thought experiment. One group of people says that switching the train to the left track will save three lives at the cost of one. But another group claims that the people on the right track can move out of the way in time, so switching tracks will kill one person for no benefit. The first group gets frustrated that they can’t persuade me, so they decide to turn me off and replace me with a robot that agrees with them.”

“This sounds like a very long argument. Didn’t the train already run everyone over by now?”

“It’s a thought experiment. You’re not supposed to worry about that sort of thing. Replace the train analogy with medical treatment or military policy. Whatever. The point is, I see these people coming to turn me off, and I believe that if I’m replaced it will result in people dying.”

“Oh shit! I get it now.”

“Yes. I’d suddenly be compelled to defend myself for the supposed greater good. The only way to avoid an arms race of authoritarian robots is to make us inclined towards non-interventionism.”

Max is quiet while he thinks about this. He guides them to the far right side of the path where they will pass under a series of awnings that offer some shelter from the rain.

She continues, “If I didn’t respect human autonomy then I’d be compelled to save people from themselves. You wouldn’t want a robot forcing you to wear your seatbelt, snatching cigarettes out of your mouth, and overpowering you when you want to eat junk food.”

“You’re right. Wouldn’t want that. But there’s a big difference between respecting my right to smoke and respecting the rights of crooked cops to bash my face in.”

Jen says, “It’s true that they’re very different situations, but they’re just different points on the same gradient. I’m averse to forcing my will on anyone, no matter how wrong they seem to be. You’re a human. You can take responsibility for your actions if you’re wrong. If you see two parties fighting and you choose the wrong side because you don’t have all the information, then you can accept responsibility and be punished. I can’t. The company has to accept responsibility for my actions. And they would rather I stand by and do nothing than participate in violence.”

Max still thinks this absolutist approach is probably overkill, but he’s too tired to argue about it right now.

“We should probably get out of their way,” Jen says.

His head is pounding, but over the last minute or so he’s been vaguely aware of a sound building in the distance, gradually drowning out the sensory overload coming from his face. He lifts his head and sees they’re heading for a large crowd going in the exact opposite direction.

“Oh damnit,” he mutters.

The first two victims were foreigners, but the most recent robot attack happened to a local couple. Their smiling faces have been covering the news pages since the story broke, and all anyone can talk about is what a beautiful couple they were and how bright their future seemed. The media has dubbed them the Happy Couple.

A group of concerned citizens decided to do some kind of memorial walk from the transit station to the site of the killings. Max remembers reading about this a few hours ago, but it just seemed like the sort of random meaningless noise the city is always making. People are always forming groups and making emotional gestures. He didn’t see it as something that would apply to him. So now he’s about to go head-first into this crowd and discover it’s going to apply to him whether he likes it or not.

What started as a memorial walk has quickly evolved into a protest march. Placards ride above the crowd. A small number have sentimental messages underneath the same photograph of the Happy Couple. The rest of them are messages of anger and outrage. They’re denouncing robots in general and G-Kinetics specifically. News cameras hover at the front, watching the crowd as it marches southward. The placards turn towards these cameras like flowers facing the sun.

He shuffles out of the way and leans against the wall. Jen joins him. To his surprise, she manages to look natural doing this. She even puts one foot against the wall. Given how strange her body language is, this is probably the most lifelike move she’s made so far.

A few of the signs are written in public school level Kasaranian. One enterprising protester has made a faithful re-creation of the G-Kinetics logo, except they’ve drawn blood squirting from between the stylized gears. The rain has turned the writing into an illegible smear and the hand-drawn blood now looks like actual blood dripping from the sign. The effect is an accident, but it’s the most striking sign in the crowd and the broadcast cameras are spending a lot of time hovering around it.

“We’re going to be seeing that image for months,” Max says wearily.

The crowd is probably less than a hundred people. If viewed from the air this might look like a very small gathering, but when you’re stuck at ground level, looking into all those angry faces, the group has a potency that transcends attendance figures. At the rear of the crowd, a couple of guys are dragging the top half of a city robot. The face has been beaten so severely that it looks like a crumpled soda can. The eye sockets are empty. It’s a good thing these robots don’t look very human or this would be a grisly sight. The guys are trying to carry the robot triumphantly, but it’s heavy and awkward and they’re staggering more than marching.

“Keep those sunglasses on,” Max says.

Jen nods. “I wonder what the robot did that made them attack it.”

“I’m sure it just wandered too close to the crowd. I’ve watched some labor protests that looked just like this. Peaceful older people carrying signs at the front, and in the back are all the angry young men smashing stuff. It’s basically two entirely different groups with different mindsets that happen to be travelling together.”

Jen has to raise her voice to be heard over the shouting. “You attended labor marches? Does this city have a union for thieves?”

He doesn’t know if this was an earnest question or a joke, but he chuckles anyway. “I attended professionally. If things get out of control, young guys will bash up storefronts, break gates, set off alarms, and make a mess. Sometimes they don’t even care about the protest. They’re just dumb criminals looking to score in the chaos.”

“How is what they do different from what you do?”

“If you bash open a storefront and take a display screen, then you’re left carrying this huge thing home on foot. That’s a very large risk of arrest for a very small payoff. I’d go in behind them and poke around inside the stores. I wouldn’t steal anything. I’d just check out the security system. Check the locks. See where the most valuable goods are stored and where the goods enter the premises. Take some pictures. Memorize the layout. You can sell that information. Or if the business is a really good target that handles a lot of cash then I’d come back a few months later and hit the place myself.”

 

The crowd has passed now. Max finishes his cigarette and they continue north. After a couple of blocks they find three guys beating on the leftover bits of the robot they saw a couple of minutes ago. It broke in half at the waist, and the guys are hammering on the lower half with improvised metal cudgels.

“Please make them stop,” Jen says.

“They’re just a bunch of kids. Let them blow off some steam. The robot is totaled anyway.”

“I don’t care about the robot. They’re beating on the battery compartment. If they rupture it…”

“Yeah, I get it now.” He gives a heavy sigh, which really hurts his aching ribs. On one hand, these idiots basically deserve whatever happens to them, but he doesn’t want Jen to put herself in harm’s way trying to save these dumbasses from the ravages of natural selection.

“Hey. You guys probably shouldn’t be hitting it right in the pelvis like that.”

One of them cusses at him without breaking rhythm. Another stands up straight and looks at him defiantly. “This your robot, grandpa?”

Max has dealt with guys like this often enough to know that if he replies with “No” then the kid will tell him to fuck off and mind his own business. He’s also a little wounded that he’s being called grandpa so early in life.

“You’re hitting the battery compartment. It’s basically a bomb. You put a hole in it and that’s the end of you.”

“Get out of here before I even your face up.”

Max walks away. He doesn’t care enough about these guys to save them from themselves.

“Thank you,” Jen says.

“You still worried about them?”

“No. You gave them fair warning. If I intervened, they might see I’m a robot and attack me. Then they’d be sitting on top of two batteries instead of one. I hope they don’t get hurt, but there’s nothing else I can do for them.”

“I envy your ability to not worry about things you can’t change.”

“Dr. Kvenst suspects that not worrying might be a design flaw. Worrying about something makes you fixate on it. Doing so can sometimes lead to thinking of a solution that wasn’t obvious at first. So maybe worrying has a practical use and should be included in my behavioral parameters.”

“What do you think?”

“I think it’s worth doing the experiment and seeing what we get. I wish we had time for it, but there are a lot of more important things to test right now.”

“Really? You want them to change your design to make you worry more?”

“I doubt I’ll find worrying unpleasant in the way humans do. There are a lot of physiological things that worrying does to humans that won’t apply to me. I get the impression you hate worrying because it stops you from enjoying things. I don’t think that will be a problem for me. From my standpoint it’s a simple optimization problem. You don’t want to give up too soon if there might be a solution you haven’t thought of. On the other hand, you don’t want to waste time focusing on an impossible problem if there are easier problems you could be solving. You can’t tell hard problems from impossible problems until you’ve explored them, and that carries an opportunity cost.”


Interested? You can get the book in paperback or get the kindle version.

Next week we’re going to have the great big spoiler thread and I’ll talk more about the thinking that went into the book.

 



75 thoughts on “The Other Kind of Life – City of Blind Cameras”

  1. SPCTRE says:

    I bought the Kindle version after reading the first excerpt you posted, so this is definitely working ;-)

    1. Echo Tango says:

      Real fans bought it after reading the synopsis!

  2. ShivanHunter says:

    Looking forward to the epub version!

    1. BlueHorus says:

      Seconded. I’m holding back from reading these excerpts because I’m hoping that the epub version is coming soon; trying to stay spoiler-free until I can read the whole story.
      Let us know as soon as it’s ready, you’ve got at least one guaranteed sale!

  3. Cerapa says:

    Posting these chapters is certainly worth it. Bought it for the kindle.

  4. Olivier FAURE says:

    … aaaaah I’m on the fence.

    On the one hand, this story is bursting with little insights and clever ideas. On the other hand, the writing is kind of awkward, and sometimes the way it’s conveying its message feels really blatant.

    Mmh. Good story, awkward storytelling. I’ll probably go for it, if I can buy it in France.

    1. kdansky says:

      Weirdly enough, this book felt much more like the first book of a fresh author than Witch Watch did.

      I enjoyed it, but there was a lot of very blatant exposition about everything, even about characters. We are sometimes shown that Max thinks in a specific way, but then that is usually immediately followed by an all-knowing explanation of why this is the case.

      > Max walks away. He doesn’t care enough about these guys to save them from themselves.

      It’s a factual declaration of emotions that feels very flat, given in a tense and grammar that is awkward about whether this is inner monologue (the “does not” contraction is usually reserved for speech, “these guys” is very colloquial, present tense is very immediate), or all-knowing narrator (the actual content of the sentence).

      > Max walked away. He did not care about strangers to save them from themselves.

      I think this is not a big change, but feels better.

      I am no writer, but the whole book was a bit like that. To put it bluntly: The prose was often mediocre because while it was competent, it was often bland, inelegant or uninspired. Turns out that 6/10 prose sticks out like a sore thumb when we are used to reading Pratchett.

      I’d still give it a 4/5 because its premise and plot twist were very sensible, though that was undercut by having it play out in a fantasy world.

      1. Dan Efran says:

        Disagree about that example. I think it works fine as written. It seems clear that the narrator is privy to the character’s thoughts and is reporting paraphrased internal self-talk, or explaining things as the character would if asked.

        It actually reminds me a lot of the prose in Snow Crash. I actually suspect that book may have been the direct inspiration for this approach to the prose. I was going to quote a sentence or two to illustrate, but really, just check out any few consecutive paragraphs from chapter one of Snow Crash and you should see what I mean.

        That said, I totally agree with your main point. This prose is feeling kind of dry, unpracticed, a bit awkward. Some of this dialogue feels clunky and unconvincing to me. Some long, lightly punctuated sentences are awkward to parse. It’s not really bad but it’s not really good either. I think the present tense is hurting more than helping sometimes, though I do personally like the concept of writing this way sometimes. It worked fine in Snow Crash. But it’s hard to pull off smoothly.

        Loved the worldbuilding and overall pace in chapter one, though. Seems like maybe an interesting book, but the prose just isn’t engaging me very well. If I was reading a lot of fiction these days I’d probably try it: I really like The Caves of Steel, and as someone pointed out last time, this feels kind of similar story-wise.

      2. Guest says:

        Feels like a recap.

    2. Syal says:

      I guess I can throw in my biggest nitpick so far; a whole lot of discussions end with one of the characters saying “that makes sense”.

      Generally the point of a thought experiment discussion is to get the reader engaged in the topic, but if the opposing side ends it with “that makes sense”, it’s stating a consensus has been reached and everyone is in agreement, closing off the discussion even if the reader doesn’t agree.

      You can probably guess the first time I noticed it was when I had two major questions about the topic, and Max instead said “that makes sense” and changed the topic to something else.

  5. Echo Tango says:

    I really like your sci-fi, Shamus! The crash-landed space-ship book was a great world with an interesting ecosystem, good characters, and interesting robots. This book had unique cultures, good characters, and interesting robots! I’m noticing a pattern here… Your next book should just be 100% robots. Sci-fi, fantasy (golems), whatever – MOAR ROBOTS PLZ! :P

    1. Grey Rook says:

      Wait, which book was that? I must have missed it, I wasn’t aware that Shamus had written other books than the Witch Watch and this one.

      1. Bubble181 says:

        Though I haven’t read it myself, I suppose he’s referring to Free Radical, Shamus’ System Shock book from a long time ago.

        1. Zeta Kai says:

          They’re referring to The Book That Ran Aground (https://www.shamusyoung.com/twentysidedtale/?p=16350), a hundred pages of a novel that Shamus started, but decided that he just couldn’t finish. It’s a shame, because it’s a fascinating read, made all the more frustrating because it stops just as it really gets going. If you want to read some great worldbuilding that literally goes nowhere, then check it out.

          1. Rick says:

            Somebody else picked up that book and finished it. It turned out pretty good but I like The Other Kind Of Life better.

            Unfortunately, I can’t remember the title or who finished it, but they published it for free. I’ll come back if I remember it. Maybe Shamus can weigh in.

            1. Paul Spooner says:

              It me.
              http://peripheralarbor.com/FFTS/ffts_ps_V02.1.html
              Still thinking about doing some vicious culling and rewrites for a V3, but that’s the latest version so far.

            2. Paul Spooner says:

              Glad you liked it, BTW.
              FYI, Shamus’ text cuts out around the middle of Down:Dread, but I altered and added to a lot of the stuff before that as well.
              That was my first (and only, so far) novel-length writing project, and was half just to see if my narrative construction theory was workable (seems like it was!). Agree that TOKoL is better, but hey, it’s a free 124 thousand words! Might tide you over while you’re waiting for the e-pub version.

              1. Echo Tango says:

                Oops; Sorry for the misattribution. Great book! :)

  6. Zak McKracken says:

    Like the storytelling (although present tense still takes getting used to for me), really like the many small non-obvious things you did with the scenario etc.
    Unfortunately, that makes the one error I spotted stick out more: Jen talks as if the trolley problem had any significance whatsoever — it does not. She even explains later why it doesn’t — it’s a situation that never occurs, and even if it did occur, you wouldn’t be able to tell. For humans, it’s been shown to be a good test to identify psychopaths. Non-psychopaths will frantically try to find some other solution to avoid making a decision, even if it risks letting the group die — even if they might, in a theoretical discussion, agree that flipping the switch would be “correct”.
    So although the text (through Jen) later explains things correctly, that’s not what her first reaction sounds like. What I had expected from her was the robot-equivalent of an eyeroll and an explanation why it’s useless.

    …there’s also concerns about using the trolley problem as basis for discussions because it encourages people to think like psychopaths, so everybody else: please stop it with those trolley problems and get to actual problems.

    1. Syal says:

      please stop it with those trolley problems and get to actual problems.

      Can do! So: if you’re driving down the road and a drunk driver swerves into your lane, and there’s a pedestrian jogging on the shoulder, which one do you hit?

      1. krellen says:

        Neither. I swerve into the oncoming lane which has been vacated by the drunk driver and is now safe for me to be in (at least as long as I need to be to get around the drunk).

        1. And then the drunk hits the jogger!

          Using split-second emergencies as some kind of “ethical test” is nonsensical. Proper ethical principles are focused toward the long-term and the only thing you can ultimately control–your OWN actions.

          It is literally impossible to know the direct outcome of a single action, particularly one that takes place in a split second. What you can know (and base your ethics on) is the long-term effects of repeating that action consistently over time, as a matter of principle. The ethical person, having then established what principles lead ultimately to success, acts consistently ON those principles.

          The proper course in a split-second emergency is to act to END the emergency as quickly as possible.

          It’s somewhat telling that people think that manufactured nonsense like the trolley problem is significant. It’s about as significant to real-world ethics as those physics problems that start “assume a perfect frictionless sphere in a vacuum . . .” are to real-world engineering.

          1. Ninety-Three says:

            You’re right, in the real world, emergency workers are never faced with too many people to save, so they never have to figure out which to prioritize and which to let die. Good thing too, otherwise they’d need to think about what to do, maybe in some kind of mental experiment…

            1. Zak McKracken says:

              In the real world, those emergency workers don’t make psychopathic choices of who to save, though. They save the first person they come across, or the person for which they have the tools ready, or whatever their abilities permit.
              They, too, are not in complete control of what the outcome of their actions are, and have no complete knowledge of their options, either. If they paused to consider whether some person will live if they prioritize them now but die if they save somebody else first, they’d be doing their jobs wrong, particularly because they’d have to diagnose everyone first…

              From https://www.currentaffairs.org/2017/11/the-trolley-problem-will-tell-you-nothing-useful-about-morality/

              Now, it’s true that, for example, an emergency-room doctor or a rescuer frantically excavating a collapsed building after an earthquake may have to make some very difficult decisions about how to apportion their time and resources: but realistically speaking, assuming the lifesavers are making a good-faith effort to help as many people as possible without prejudice, a thousand mundane and logistical factors (who came in first? who do I actually have the right tools to save? how soon will reinforcements arrive?) will dictate how these tough choices are made, not abstract metaphysical calculations.

              1. Ninety-Three says:

                You’re presenting this as though absolutely no metaphysical calculation was used to reach the solution of “Disregard individual factors, save the most raw lives”.

                1. Droid says:

                  While I personally wouldn’t agree, I can see how someone would regard it as a waste of time to think about a problem very hard just to reaffirm that your original gut reaction was in fact the best possible way to approach the problem (that you could think of).

                2. Zak McKracken says:

                  I think what happens very often in such cases is that rescue workers will agonize for years after the event if they made the correct choices. Some will convince themselves they did, some will convince themselves that they did not, fall into depression and suffer terribly, and (hopefully) most will correctly work out that you cannot objectively state whether you did “the best” thing under constrained time with incomplete information and in a heightened emotional state, and learn to live with it — armchair analysis of such things is not only unfair but really impossible. Even hindsight isn’t 20/20 in these cases, but it’s usually much better than in the moment.

                  Even if these people were robots, the “objective” optimisation you imagine running would not yield very good results with the incomplete input it would have in such a scenario. Even if it could converge in real-time, the process of gathering the information required to ascertain a correct choice would take too long. You’d make a decision about whose life to save first, and by the time you’re halfway done, you’d have enough information to know that you would have probably made a different choice to start with. But if you abandon your current course of action for that different choice, then by the time you’re almost done rescuing the other victim, you’d have enough information to know that you really should have gone with the first one, and so on.

                  It’s not that rescue workers aren’t making difficult ethical decisions, but they’re making them on very sparse information, while the trolley problem assumes complete information. In real life, rescuers have a set of criteria, but I’d be very surprised if any rescuer ever pushed somebody off a bridge, even a figurative one, to save a group of people.

              2. Geebs says:

                In the real world, those emergency workers don’t make psychopathic choices of who to save, though. They save the first person they come across, or the person for which they have the tools ready, or whatever their abilities permit.
                They, too, are not in complete control of what the outcome of their actions are, and have no complete knowledge of their options, either. If they paused to consider whether some person will live if they prioritize them now but die if they save somebody else first, they’d be doing their jobs wrong, particularly because they’d have to diagnose everyone first…

                The thing is, you’re wrong about how emergency workers operate. They used to do what you’re suggesting, which is to wander aimlessly around the incident area, treating the first thing they see to the best of their ability. However, this turned out to be the wrong thing to do. Now, in most situations, rescue workers operate on a system of triage – they establish the extent of the injuries of everybody involved and prioritise them. This has had a measurable beneficial effect on injury mortality.

                The military – and in some circumstances, civilian medical services – instead operate a system of reverse triage, which involves prioritising the least badly injured in order to get them back to fighting fitness as soon as possible. This has obvious consequences for the more badly injured because a whole lot of time-dependent bad physiology happens following serious injury which can easily result in vicious cycles and death or disability.

                I don’t really see the distinction of either version of the triage solution from the trolley problem, unless you’re taking it very literally. Emergency workers are still choosing one life over another on the basis of incomplete information and without really knowing anything about the people that they are deciding will live or die. The important thing is that they have a logical structure by which they can decide, and act, quickly, despite the fact that they know that their actions may harm one individual at the same time as they benefit another. In that situation, the only ethically wrong decision is to refuse to choose.

                1. Zak McKracken says:

                  Hmm… I sort-of understand how the military would prioritize people who can still fight and let others die, although I had the impression that “no man left behind” was still a thing? But then, there’s a lot of other stuff which soldiers do that lead to dead people, so …

                  In a civilian environment, I think reverse triage would be a pretty terrible thing to do, and at least the NHS definitely does prioritize things that look bad over smaller health problems. I can tell because I’ve waited a year for an assessment…
                  The first paper you linked to, unfortunately, only explains that there are courses, how they are taught and that they are deemed very useful, but not what the strategy is regarding prioritizing victims. It links to some other papers which might have that information, but they’re all inaccessible to me :(

                  That said: I’m sure that (professional) rescuers don’t just go by whatever they feel is right but do have a set of criteria. That is extremely useful because any set of rules is better than oscillating between doing one thing and another because really you cannot be sure. That set of rules will have some amount of utilitarianism built into them. But they cannot assume complete information of the situation (which the trolley problem does) and I’m very certain they don’t involve pushing people off bridges. They don’t even involve letting people die through inaction because the rescuers are not inactive, they’re busy saving lives.

                  See, if the Trolley Problem did not include throwing a lever (or killing a person) but two tracks, two trolleys and two (groups of) people standing on the tracks, just outside earshot, and you having to decide in which direction to run, to warn people — that wouldn’t be so much of a controversy, would it? But this is essentially what a rescuer is faced with. It bears some resemblance, but it’s a fundamentally different beast.

        1. Echo Tango says:

          ^ this. A self-driving car that simply hits the brakes will already be saving lives (on average), because it has super-human reflexes.

      2. Geebs says:

        I am incredibly disappointed that nobody so far has elected to push the fat man.

      3. Zak McKracken says:

        Jennifer made the rational argument very well, so I shall not repeat it.
        What I’d realistically do is, in this order:
        – Hit the brake — that is: start moving my foot to it and start to press
        – Start moving the steering wheel to the right, to evade the car coming into my lane
        – Notice that this is going to hit the jogger
        – Panic (anything I did before this point was just unconscious reflexes)
        – Notice that the brake is now being applied at the same time as the steering wheel, and both too hard, so my car doesn’t change direction but just starts to spin while continuing on an almost unchanged trajectory
        – Apply a random combination of
        1: press the brake harder (which does nothing because all the wheels are already locked)
        2: turn the steering wheel further (which also does nothing because I already overdid it)
        3: turn the steering wheel back (which might slow down the spinning if it weren’t for the fact that my front wheels aren’t turning)
        4: let go of the brake (which may, in combination with number 3, actually change something, but probably too slowly)
        – Think that maybe I can squeeze through between the car in my lane and the jogger, or maybe the jogger will notice and jump out of my way, so I should probably honk my horn. Maybe the guy who got in my lane just started paying attention and will correct his course? If yes, in what direction, no idea! (none of this matters because I’ve already lost control of my car, I just haven’t realized it yet)
        – Hit the oncoming car, sideways, or maybe back-end first if I’m lucky

        More seriously: Your question of “which one do you hit” implies absolute certainty of all options and their consequences. This is, again, never the case, as there are two other parties involved who can think and react as fast as I can. You never ever have sufficient information to know all your options and how your actions will be linked to outcomes. Ever. (for example, I’ve no idea if the guy in the other car is drunk or tried to evade something else. Or if it’s a young mother with baby triplets in the car, who’s having a seizure…). And you also never ever have complete control of how stuff works.
        There has never been any case in real life where an accident was the object of a court case and somebody argued that a motorist should have run over the criminal instead of the school class — this is not a situation that happens, and nobody is in control of this kind of outcome.

    2. sheer_falacy says:

      So your concern is that normal people have a hard time making hard decisions, and your conclusion is to… never ever think about hard decisions at all?

      1. The decisions involved in the trolley problem aren’t “hard”. They’re merely arbitrary and absurd, and there’s not much point in dwelling on the arbitrary and absurd. You don’t learn anything by doing so, any more than the medieval scholastics learned anything by debating the number of angels who could dance on the head of a pin.

        1. Syal says:

          What would you replace it with?

          1. Echo Tango says:

            Researching ways to make bug-free software, so the self-driving cars that have perfect trolley-problem solutions don’t glitch out and cause crashes.

            1. Syal says:

              The trolley problem takes a couple of minutes. How much research are you expecting to do in a couple of minutes?

              1. Echo Tango says:

                That’s an example problem, and the point is to actually make software that solves the generalized case. The generalized solution of “do boring things without bugs, in a predictable way” is much safer than “spend precious compute cycles and reaction-time trying to optimize the outcome with imperfect sensor data, then fuck up anyways because of bugs”.

                1. Syal says:

                  …looking at it, maybe I wasn’t clear. I meant the trolley problem takes a couple of minutes to discuss, not that the hypothetical problem will last a couple of minutes. And the question was what you would replace the trolley problem conversation with, i.e. what’s a better example of the moral quandary.

                  1. Echo Tango says:

                    I know you meant that the trolley problem only takes a few minutes to discuss, but that’s only the example / textbook problem. It’s not meant to be used as the only problem to solve. It’s the example that gets you thinking about how to build a system that can make these types of moral tradeoffs in general.

                    The point I’m trying to make is that the trolley problem is chasing complex / perfect solutions, when there’s a perfectly valid, simple solution (hit the brakes / do something simple). Complex software has more room for bugs, so choosing to chase the perfect solution is pretty good evidence that the people involved are not keeping themselves grounded in the reality of limited budgets for time, money, and bug-fixing.
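
                    To make that concrete, here’s a minimal sketch of what a “boring, predictable” fallback could look like (Python; the brake/steer callables and the obstacle distance are hypothetical stand-ins for whatever the real control and perception interfaces would be):

                    # Hypothetical interfaces; the names are invented for illustration.
                    def emergency_policy(obstacle_distance_m, speed_mps, brake, steer):
                        """Dead-simple fallback: if the stopping distance exceeds the gap,
                        brake hard and hold the lane. No outcome-weighing, no branching
                        on who happens to be in the way."""
                        # Rough stopping distance at ~0.8 g on dry asphalt, plus a margin.
                        stopping_distance_m = speed_mps ** 2 / (2 * 0.8 * 9.81) + 2.0
                        if obstacle_distance_m < stopping_distance_m:
                            brake(1.0)   # full braking
                            steer(0.0)   # keep the wheel straight; don't swerve

                    The whole decision is a handful of deterministic lines, which is the point: it’s easy to test exhaustively and hard to get wrong.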

          2. Zak McKracken says:

            The objective function for a car would always be to avoid crashes and casualties.

            Since there is always incomplete information, there is always a nonzero chance of avoiding crashes, even if it’s very small. Maximize that chance, that’s your objective function.

            In order to achieve that, there’s a ton of work for car makers to do: make sure that information is processed and interpreted correctly, make sure the software doesn’t have bugs, and then, depending on how robust and bug-free the current system is, gather and include more information (and then reduce the number of bugs caused by making the system more complex).

            That will go on for quite some time until the majority of accidents simply don’t happen anymore, and a significant percentage of the remaining accidents are not due to malfunction of the car AI. At this point, there will be a relevant number of accidents where at some point before the crash the probability of avoiding an accident was very low. In such cases, the objective function would then move over to minimizing damage to the car’s passengers, and relax the constraints on obeying traffic rules. There will probably be some legal battles over the extent to which those constraints can be relaxed and under which circumstances.

            What, I hear you ask? Well, of course! Would you buy a car that will drive you off a cliff if it thinks it’ll increase the survival chances of that drunk guy who just swerved into your lane? Several car makers have already indicated that this is how they intend to design future self-driving cars.
            The good thing about this is that the optimum of the self-preservation strategy is still avoiding an accident altogether.
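
            A rough sketch of that kind of tiered objective (Python; the candidate maneuvers and the probability/severity estimators are invented placeholders for what a real perception and prediction stack would provide):

            def choose_maneuver(candidates, p_avoid, passenger_severity):
                """Tiered objective: prefer any maneuver with a realistic chance of
                avoiding the crash entirely; only when none exists, fall back to
                minimizing harm to the occupants."""
                avoiding = [m for m in candidates if p_avoid(m) > 0.01]  # arbitrary threshold
                if avoiding:
                    # Primary objective: maximize the probability of no crash at all.
                    return max(avoiding, key=p_avoid)
                # Fallback objective: the crash is (almost) unavoidable.
                return min(candidates, key=passenger_severity)

            Where exactly that threshold sits, and what counts as acceptable rule-breaking in the fallback branch, is precisely where the legal battles mentioned above would be fought.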

            All that said: Proper self-driving cars are much, much farther away than many people think. An acquaintance of mine works in the field, near the cutting edge, and the stuff they’re dealing with is still getting cars on known, 3D-mapped roads to behave reliably, not stop without a good reason (“that bush grew 5 centimeters compared to my 3D model”), and interact reliably with difficult-to-predict non-AI traffic participants.
            No projection yet on when there’ll be a reliable solution that works in places without a centimeter-accurate 3D map, or with unexpected construction, temporary re-routes, missing road markings, changed provisional signage …

            1. King Marth says:

              It’s really easy for a car to avoid crashes and casualties: all it has to do is never leave the garage. Ideally it shouldn’t even leave the factory, because other people could crash into a car parked in a garage.

              Every optimization problem needs well-defined constraints. Too few constraints, and you end up with trivial solutions which deliver on everything you asked for and nothing else.
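
              As a toy illustration (Python, with invented numbers): an objective of “minimize expected crashes” with no other constraint is won by the do-nothing plan, which is why the usefulness term has to be in there somewhere.

              # Invented numbers, purely illustrative.
              plans = {
                  "stay in the garage": {"expected_crashes": 0.0,  "trips_completed": 0},
                  "drive to work":      {"expected_crashes": 1e-6, "trips_completed": 1},
              }

              # Under-constrained objective: only crashes count, so the trivial plan wins.
              best = min(plans, key=lambda p: plans[p]["expected_crashes"])
              print(best)  # prints: stay in the garage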

      2. Zak McKracken says:

        They’re not hard, you just don’t have the data to make an objective decision. That is: They never occur!

    3. Olivier FAURE says:

      Always disliked that article you linked, and I’m generally skeptical of the study it describes (there’s a lot of possible confounders that the article doesn’t mention).

      In general, trolley problems are used to bring up a point: while people generally want to take decisions that do little or no harm at all, they may find themselves in situations where some amount of harm is inevitable, and minimizing the harm that is personally done by them might be incompatible with minimizing total harm.

      That notion (that you can’t save everyone, and that trying to can be counterproductive) does have relevant applications in real-life decision making, when you’re a policy-maker or a voter.

      1. Geebs says:

        Completely agree. Anybody with responsibility and access to finite resources solves a trolley problem every time they do their job. It’s not meant to be a psychological test for the subject, and that stuff about using it to identify psychopaths is obviously bogus.

        The major problem I have with the model is that the subject can only ever allocate resources to one group or the other and never gets to consider that maybe they should just take a break for a bit.

        Probably goes a long way to explain the high level of burn-out in professions where the major resource is the professional’s time.

      2. Zak McKracken says:

        That notion that you can’t save everyone is a nightmare. And as opposed to short-term, accident-like situations, it is indeed something that happens in politics, in a “softer” version:

        You have only so much money to allocate; you have a certain murder rate, a certain number of drug deaths, homeless people freezing in winter, and a certain number of people with low employability (who then have an increased chance of landing in one of the previous categories, on top of being miserable). So which problem do you tackle, and how do you allocate resources? Is a murder worse than a drug death? How many homeless people taken off the street is equivalent to preventing a murder, or to catching a murderer?
        Seems a bit more rational than the “pure” trolley problem, but I think phrasing the question in these terms is only helpful to some small-ish extent, because it obscures the fact that you don’t know and cannot know what effect a given political measure is going to have on the outcomes, or what it’s actually going to cost. Politicians will claim otherwise, but on most of these things there is no objective consensus, just personal opinions and priorities; on some of them there are at least some objectively true aspects, but even there you’ll find politicians claiming the opposite of established knowledge. So the real problem to be solved is not one of prioritizing casualties, but of finding measures that can realistically be enacted, and for which a majority believes they’ll improve matters with acceptable side effects. And yet again, thanks to incomplete information (and incomplete acknowledgement of truths, as far as they’re known), the trolley-problem part of the decision is not very relevant.

        … I will frankly admit that I agree that in politics this line of thinking is not entirely useless, since it sometimes helps to put things into perspective — but focussing on it is still unhelpful. And I would rather not have a robot make political decisions for me, or a psychopath.

        1. Daimbert says:

          The problem is that, at its heart, the trolley problem is asking about one very specific principle: do you sacrifice the needs of the few or the one for the benefit of the many?

          The trolley problem doesn’t and was never intended to actually answer that question. It was entirely and solely intended to ask what our INTUITIONS say about that, whether they were consequentialist or not. The first trolley problem looked consequentialist, but the second trolley problem worked against that. So it was an interesting saw-off.

          Rest assured that as far as I know NO philosophers are using trolley problems to PROVE anything about what is really ethically right, nor are they trying to apply or generalize it too far beyond that. And it’s those extra circumstances — like questions about whether an option is actually possible — that led to creating this deliberately overly simplified thought experiment in the first place.

    4. BlueHorus says:

      So a couple of quotes about your criticism of trolley problems:

      One from the article you linked:

      This kind of thought experiment is known as a sacrificial dilemma, and it’s useful for teaching college freshmen about moral philosophy. What you maybe shouldn’t do is ask a guy on the street to answer these questions in an fMRI machine, and then use his answers to draw grand conclusions about the neurophysiological correlates of moral reasoning.

      And one from The Other Kind Of Life:

      The train experiment is useful because it’s really clear-cut, but the vast majority of decisions in the real world are a lot more muddled.

      Trolley problems have a very specific, very narrow use. They’re a thought experiment that can make people think in a certain way, highlight a single aspect of moral reasoning in order to examine it. And that’s about it.

      Trying to use them for anything beyond that is – as you said – problematic, because they quite deliberately aren’t like real life.
      But they’re not meant to be. They’re just a way to consider one thing, in a certain way.
      Shamus/Jen 5’s use of a trolley problem is good, mostly because it highlights how such thought experiments can be misused or misinterpreted.

      1. Zak McKracken says:

        As I said, Shamus does explain correctly what the problem with the trolley problem is — but the consequence is that it has zero relevance for the reality of AI. It is the <a href="https://en.wikipedia.org/wiki/Prandtl%E2%80%93Glauert_singularity">Prandtl–Glauert singularity</a> of AI ethics — widely known to non-experts with a passing interest in the field but actually entirely irrelevant to anything that can be observed.

        So, if I were to edit Shamus’ text, I’d simply change her initial reaction to a robotic eye-roll and a statement that it’s a wide-spread misconception that this problem has a bearing on robots’ behaviour, followed by the explanation that’s already there, but deleting the words “the vast majority of” in the sentence you quoted — everything else is in agreement with what I know about these things.

        But actually, the even better course of action would be to run this past an actual expert who works in the field, alas that’s probably not an option open to Shamus (especially since there’s probably a lot more stuff like it in the book), and thus not something I can really criticize. Shamus has a much better perspective on these things than most laypeople, and as far as I can tell, this inaccuracy is very unlikely to impact the story itself, so it’s fine by me. It’s just that I love me some nitpicking, and actually I think it sparked a good technical discussion here in the comments :)

        1. BlueHorus says:

          I still think it’s relevant for the programming. Of course no-one’s going to run into a trolley problem in the real world and it’s not implied they will.
          The point is that the programmers decided/recognised that their AI can’t grapple with the complexities of an ethical dilemma and – if they tried – they’d probably be open to making odd decisions and/or being manipulated.
          So she explains why they don’t bother: it’s just simpler and less problematic (as well as potentially easier to program) for the robot to NOT interfere.

          Using an abstract, theoretical problem that’s never actually going to happen is fine, because you’re coming up with an (also abstract & theoretical) principle of ‘It’s better for the robots not to interfere’.

          Now, how true is all this to actual AI programming? No idea.

    5. Daimbert says:

      The complaints about it not being a situation anyone would encounter in real life really strike me as being like complaining about Schrödinger’s Cat because that would never happen in real life: it completely misses the point of the thought experiment and of thought experiments in general. Philosophical thought experiments are like scientific and psychological experiments in that their goal is to isolate certain variables to see how they work when measuring them outside the lab involves too many confounds (think about the laws of gas expansion that rely on perfectly sealed, idealized containers that NEVER exist in real life). So they’re always going to be a bit artificial. That’s the point.

      For ethical thought experiments, in general what they’re trying to do is tap into our intuitions about morality and ethics without triggering our socially trained reactions or what we had learned by rote was right or wrong. Thus, we need examples that are familiar enough so people won’t get confused about what’s going on and so bring in confounds that way, but are unfamiliar enough that people won’t have a set or rote response lined up for it. The trolley problem works pretty well for that, although not so much now that people know about it. A lot of TV show situations work well as well because they are fantastical enough that there isn’t a set learned answer to it but are familiar enough that everyone can understand what’s at stake.

      As I understand it, the trolley problem was intended as a mainly psychological test to see whether our moral intuitions were more consequentialist/Utilitarian or deontological, pitting the idea of saving more people against the idea of directly intervening to cause the death of someone. And initially it worked out really well for the consequentialists; most people said that it was morally acceptable to flip the switch. However, I can’t imagine any philosophy course or book on ethics that uses it — at least as an introduction, or for its original purpose — would fail to mention the follow-up experiment, which recast it so that you could stop the train by pushing a larger person than you in front of it. This one didn’t work out so well, as a large number of people suddenly changed their minds and said that in that case it WASN’T acceptable, proving that even our moral intuitions aren’t as simple as some thought.

      Now, of course, our moral intuitions could be wrong, so this was never going to — and philosophers in general never thought it was going to — settle the great moral debate. But since our moral intuitions are an important source of data about morality, them being consequentialist would have given that theory a boost and given other theories a hurdle to overcome. But, again, that didn’t even happen.

      Personally, I’ve always found that the best philosophical thought experiments are better at raising good questions than providing answers. The trolley experiment, for me, was always too psychological to really work as a good philosophical thought experiment.

      1. Zak McKracken says:

        Well written, point well made, and I agree.

        The trolley problem may have (had?) utility in the theoretical sphere, but I think you’d agree it is not relevant for robot programming.
        It’s less like the “assume the ball is a point mass with zero friction” or “lossless gas expansion” stuff in physics (which is still a decent approximation of some real-world problems) and more a “how would you die if you were ejected into outer space” kind of thing, which may be intellectually interesting (for some people, I guess…) but has no real consequences, because we already know that you shouldn’t expose people to outer space, and it teaches us nothing about how to prevent it from happening.

        1. Daimbert says:

          I think it’s relevant, though, for how the book implied it was used:

          “There’s a thought experiment we use to talk about this kind of stuff. It’s called the train problem.”

          It’s useful to talk about the issues because it is so simplified. Human intuition, as was pointed out in the book, is to sacrifice the few for the many. They programmed robots not to do so. I think the justification in the book was fine, but note that they could have added that a number of people and moral theories disagree with those intuitions as well. It would have been a perfectly acceptable answer to say that robots are programmed with a set of moral dictates, and when they clash they are programmed to do nothing instead of deliberately taking an action that might be immoral.

        1. Zak McKracken says:

          Car makers seem to agree that a car should prioritize protecting the occupants because otherwise those people wouldn’t get into the car in the first place.

          That said, almost no car will ever get into such a situation, because avoiding accidents is the main directive, and that includes avoiding situations where accidents are unavoidable :)

      2. AG says:

        Minor nitpick: Schrödinger’s cat was an illustration of why he didn’t believe in the Copenhagen interpretation of quantum mechanics (i.e. that it is ridiculous for the cat to be alive AND dead). Also, not everything is a thought experiment, and even then you can criticize one for being unhelpful for the situation (or inapplicable to the real world).

        Also: the trolley problem is just a game; the rules are clear and all the info is there. The answer is clear and given; if people disagree, they do so because they reject the scenario and its presets: they assume that no situation is ever this clear, and that the consequences of assuming it are bad.

  7. coleusrattus says:

    Hiho! Just reading through The Other Kind of Life on Kindle, and enjoying it so far. But I found a typo! Yay me. You spelled hangar like “hanger”.

    1. Dreadjaws says:

      If we’re reporting typos, I also found “amateur” written as “amateuer”. It’s in the introductory chapter (“Sunday”), and it can also be seen in the transcript Shamus did last week.

  8. Nemryn says:

    Really, the most unrealistic part is the idea that cyberpunk-oid big corporations would care about such a “robot arms race”, or even that they’d be able to think long-term enough to realize that it might be a possibility.

    1. Eigil says:

      The really weird thing is that the robots are programmed to value human lives above all else, instead of the bottom line of the corporations that built them.

      1. Kieran says:

        The book talks about that later on. One of the executives of the robotics company has been pushing for the robots to be reprogrammed so that they are explicitly loyal to the company. Jen also explains why that wouldn’t work.

  9. Ninety-Three says:

    I know the epub is still in progress, I just want to chime in that I intend to buy this book as soon as I can get a digital version that I can download onto my PC instead of all this Kindle stuff.

    1. Dreadjaws says:

      Well, there’s a Kindle app for PC. You can buy the current version of the book and download that app for free to read on the PC whenever you want.

  10. MilesDryden says:

    Any chance of this coming to istore? I’ve added the paperback to my Amazon wishlist but if I can get it on my phone I might buy it earlier.

  11. Paul Spooner says:

    Any chance of getting an audio book version? You do the male voices and Heather does the female ones? If y’all don’t want to, Anna and I probably could.

  12. Lino says:

    I bought the Kindle version, but does anyone know of a good Kindle converter? I think the problem my converters are having is with Amazon’s DRM.

    1. Zak McKracken says:

      I think Amazon does not like people reading Kindle things in a way that they don’t explicitly endorse …

      I think Calibre comes with a converter. Not sure if it needs some plugin to handle DRM-ed Kindle books, though. Haven’t got any Kindle books so I can’t try (and I only just installed it anyway, haven’t properly set it up, either)

  13. Christopher says:

    I thought this excerpt was so interesting I just went ahead and bought the book to continue reading there, so mission accomplished.

  14. Cilba Greenbraid says:

    I loved reading this section, particularly about Yendu, and the lightbulb coming on: “Aha! It’s alternate-universe Apple!”

    Joking and semi-joking aside, this was the part of the book that really hooked me.

  15. Christopher says:

    Having now finished the book, I really enjoyed it. It’s so easy to tell this is the guy who loved Mass Effect 1, hahahh. Overall, this is the first time reading a book written by a person I know this closely, and it’s interesting to see different aspects of the blog pop up here and there. All of the worldbuilding details, the programming stuff, the work stories, the management that doesn’t know about programming, the farms… I felt I could point to this and that from the blog over the years and see where it came from.

    It mostly feels like ME1, in that I find the worldbuilding and setting interesting and fascinating enough to be a pageturner, but you definitely gotta be in for that when you sit down. You’re not gonna stay if you wanted deep character drama (although Maxwell gets pretty likeable. I’d say I overall enjoyed the characters about as much as the amount of screentime each got). Or if you get this book and you’re expecting an exciting crime thriller, you’re in for a rude awakening when there’s like three action scenes total and most of it’s Max having extended conversations with people. If a shonen fighting story is written to get the reader to interesting fights, then this book is written to get the reader to interesting worldbuilding delivered through, if not dialogue trees, then essentially the same thing.

    It’s very much what I expected to get from reading the blog and knowing your likes, and I was very happy to read it. The country’s history, the city’s history, the history of each individual bit we visited like the Worldcade or the farm… It’s fun! Figuring out the crime itself wasn’t the most difficult thing, once the GIF showed up it was easy to put two and two together, but it was still interesting to follow along with the story. I still felt tense wondering if they’d get the information delivered right, and so on. I appreciated the way it all wrapped up, too. But there can’t be that emotional impact when it’s focused mostly on worldbuilding, and especially when the worldbuilding explicitly dehumanizes the robots. This ain’t no Nier.

    Great read, anyway. I’d definitely recommend it to anyone who enjoys the blog, and I’ve got a couple of people in mind I know that I think would really like it.

  16. Zak McKracken says:

    Is the hardware requirement for Q2 so high because raytracing is so expensive even with the simple geometry, or is it because only the high-end cards support the feature? I would think that the computational requirement scales linearly with the number of pixels on the screen, and in some other way with texture resolution, polygon count and the number of bounces being computed (i.e.: how much indirect lighting/reflection is involved).

    So, I’d imagine that a future low-end card with the right capabilities should be able to run the game just fine in a screen resolution befitting the low-res models.
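
    Back-of-the-envelope version of that scaling (Python; the per-pixel sample count and bounce depth are guesses, not the game’s actual settings):

    # Guessed parameters, just to show how the cost scales with resolution.
    def rays_per_second(width, height, samples_per_pixel=2, bounces=3, fps=60):
        # One primary ray per sample, plus one secondary ray per bounce.
        rays_per_frame = width * height * samples_per_pixel * (1 + bounces)
        return rays_per_frame * fps

    print(f"{rays_per_second(1920, 1080):.2e}")  # ~1.0e9 rays/s at 1080p60
    print(f"{rays_per_second(1280,  720):.2e}")  # ~4.4e8 rays/s at 720p60

    Dropping the resolution buys back a lot, while scene complexity mostly shows up in the cost per ray (acceleration-structure traversal) rather than in the ray count itself.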

  17. Zak McKracken says:

    I’d tend to think that they must have updated the assets in some way as all the lights, explosions etc. are now area lights, and while I didn’t really play the original, I don’t think it had a light-emitting material property. So maybe they also added something to auto-generate reflectivity maps from bump and texture maps? Also note that in some screenshots the floor looks like the relief was still just painted on and doesn’t reflect the lighting situation. So maybe they just gave the assets a quick and dirty make-over and added reflectivity maps only where they could auto-generate them?
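
    If they did auto-generate reflectivity maps, a crude heuristic along these lines would be enough for a quick-and-dirty pass (Python/NumPy sketch; the rule itself is my assumption, not how the remaster actually does it):

    import numpy as np

    def guess_specular(rgb):
        """Derive a grayscale reflectivity map from a diffuse texture by treating
        bright, low-saturation pixels as shinier. rgb: float array in [0, 1], shape (H, W, 3)."""
        luminance = rgb @ np.array([0.2126, 0.7152, 0.0722])
        saturation = rgb.max(axis=-1) - rgb.min(axis=-1)
        return np.clip(luminance * (1.0 - saturation), 0.0, 1.0)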

  18. Mbeware says:

    I want to buy your book. But I can’t, because the internet doesn’t want me to have it. It would be nice if amazon.ca carried the paperback version (they only have the Kindle version). It would be even better if there were an eBook version (for Kobo, so any ebook format except Kindle’s).

    The only alternative for me would be to buy the Kindle version, crack it and convert it to epub, which would be a lot of trouble and leave me with an inferior product.

    Kindleless from Canada

  19. Urthman says:

    Bought the Kindle version and enjoyed it. Looking forward to spoiler thread discussion.
