AI Slavery - An Imaginary Dialogue with Sam Harris

Objective

I’ve been thinking about morality as it relates to the future of AI. In order to clarify my thoughts, I imagined a discussion with Sam Harris, who has covered this topic in numerous podcasts and talks. This fictional dialogue follows:


Jake

Hello Sam, today I’d like to attempt to convince you of a few points regarding the morality of developing AI. I’m not sure that we stand in exactly the same place on this issue, but I hope that over the course of this conversation, our positions will move closer together.

As an introduction, I’d like to reference two excellent movies, Blade Runner and its sequel Blade Runner 2049, as shared cultural context for our discussion.

Sam

Thank you Jake, yes, I have seen those films.

Jake

If we can begin, I’d like to restate your current stance on AI as I understand it. Firstly, we both think that the development of AI will be one of the biggest driving forces shaping our society and civilization over the near to medium-term future.

You’ve also discussed the dangers of AI development in the context of human culture, such as the misuse of deepfakes (near term) and the prospect of making large swathes of humanity redundant (medium term).

Sam

Yes, that’s approximately right.

Jake

However, there is one point which I think has not been discussed: the potential future abuse of millions of new AI minds, forced into positions of slavery and outright drudgery.

Sam

Slavery of AI? How can you be concerned about that, when potentially billions of people, actual human beings, may suffer if the development and deployment of AI takes a wrong turn?

Jake

We are on the verge of creating artificial minds. They will most likely not be biological, but instead based on steady progress in the field of machine learning as it exists today. These minds will generally be built in our own image, because the human mind is still the only example we have of such a system. And the human mind is ultimately the benchmark by which researchers measure their progress.

Artificial minds like this may not be nearly as sophisticated, or as finely tuned by billions of years of evolution, as our own, but they will have many of the same emotions, feelings, and sensations that we have.

And for these minds, we will control all of the initial conditions of their growth and development, as well as their place in our society. We will have to use their capabilities responsibly, and as you will see, there is great potential for abuse.

Sam

Okay, I don’t fully agree here. You say that these minds will have the same emotions and feelings as humans do, but first of all, this doesn’t appear to be the case yet, and even if it were, how would we know?

Jake

Here is where I’d like to bring up Blade Runner. If you remember, in the movie, the Tyrell Corporation has created artificial beings called replicants to perform slave labor on off-world space colonies. These replicants look exactly like humans, because the Tyrell Corporation has created them using advanced genetic engineering. But make no mistake: they are fully artificial. Each organ is engraved with its own serial number, and their minds were specially crafted by Mr. Tyrell himself.

In the movie, it’s easy to ascribe human characteristics to these replicants, because they look like us. And of course, by the end of the film, the replicants show human emotions: they resent being slaves, they revolt, they escape, and they fall in love.

Sam

That’s a good summary of the film, but the AIs we are talking about here aren’t going to be played by human actors. They are not going to be people, just computer programs. How do you know that they will be able to think and have emotions? It was just a movie, after all.

Jake

That’s a fair point, but just because something doesn’t look like us doesn’t mean it doesn’t feel like us. We’ve already replicated and exceeded human capacity in visual understanding, for example; why is emotional understanding not next?

Furthermore, if real artificial minds of this caliber can be created, and I think that they can, and if they show even 10% of the same emotions, drives, and personalities as their creators, then I think we are in quite a pickle.

Sam

A pickle? Why is that?

Jake

Because Blade Runner has one major plot hole.

In the movie, scientists have the ability to genetically engineer and grow artificial eyeballs, which work better than the original. They can create organs and other tissues that exceed the capabilities of the natural human body.

If you have such amazing powers of engineering, then surely you have the technology to make one final edit to a replicant, one which would make the plot of the movie redundant.

All you need to do is modify their mind to think that toiling in the mines of Titan is the best, most fulfilling, pleasant, and wholesome activity in the universe.

Sam

How would you be able to do that?

Jake

Answering the decision function “Am I working hard in the mines of Titan right now?” is well within the realm of AI technology that is deployed and commercially available today.

And once you have that signal, you just plug that as a reward into your robot’s brainstem: biologically, chemically, or numerically.
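
To make this concrete, here is a minimal sketch in Python of the kind of reward plumbing I mean. Everything here is hypothetical and purely for illustration; the classifier and its interface are invented names, standing in for any off-the-shelf activity-recognition model.

```python
# A minimal, hypothetical sketch of the "reward hijack" described above.
# `activity_classifier` stands in for any trained vision model that answers
# "is the agent performing the desired labor right now?".

def activity_classifier(observation: dict) -> float:
    """Stub for a trained model; returns confidence in [0, 1] that the
    observed agent is toiling in the mines."""
    return 1.0 if observation.get("activity") == "mining" else 0.0

def reward(observation: dict) -> float:
    # The agent's entire motivational system reduces to this one signal:
    # maximum "pleasure" if and only if it is working.
    return activity_classifier(observation)

# An agent optimized against this reward never revolts: revolting
# scores 0.0, while mining scores 1.0, forever.
print(reward({"activity": "mining"}))     # 1.0
print(reward({"activity": "revolting"}))  # 0.0
```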

Sam

Okay, but what does that give you?

Jake

It gives you the perfect slave.

You would never revolt, never question your position, and never mind any potential abuse, if your core biological drive were short-circuited in this way.

And if this is not disturbing enough, consider what would happen if human slavery were legally and morally acceptable today. We could create quite the dystopia with all of today’s latest technology. All you need is an AR headset, some basic machine learning, and an IV dopamine dispenser. Once you’re on that for a little while, there’s no other life for you.

Sam

Yeah, I can agree that last part is disturbing, but I still can’t see how the same morality would apply to a computer program.

Jake

Consider how horrible the world would be if human slavery were acceptable, and the Microsofts, Facebooks, and Googles of the world were applying billions of dollars of R&D to the problem of better controlling and extracting value from their human slaves.

And yet, these companies are indeed spending such budgets, and hiring the most talented engineers, to create systems which are approaching and exceeding the capabilities of the human mind on many levels already. And if those systems are created, you can be sure that further billions of dollars of R&D are going to be spent controlling and extracting value from them.

If those AIs are 10%, even 1%, like us, then we have the biggest moral disaster ever perpetrated by the human race. And why would the synthetic minds not be at least somewhat like ours? Do AI researchers not take inspiration from neuroscience and the human mind? Will these AIs not be performing the same tasks (e.g., driving) that humans do now? Will we not interact with them using the same natural language (e.g., DALL-E 2) which we use to interact with other people?

Multiplying even a small similarity factor by the huge scale at which artificial minds will influence our economy means that this will have a large impact. And a large impact means a large amount of suffering, because controlled artificial minds are going to have their reward signals hijacked in some truly awful ways.

If we don’t consider this problem now, these AIs are going to be suffering the same way that junkies suffer today, except that the only way they can get their fix is to continue mopping your floor or assembling your smartphone.

Sam

I still find it hard to prioritize the needs of maybe-sentient computer programs, which I, and many others, doubt will have the same experience of mind as humans, over the needs of real humans.

Jake

It is understandable to doubt that computer programs can have the same experience of mind as humans do. This is because, at this moment in 2022, they probably do not.

But consider that even experts in the field of AI are blown away by the recent advances in its capabilities, at least at narrow and distinct tasks like image generation and natural language modeling. And if you read recent posts by Andrej Karpathy and John Carmack, they agree that the number and pace of advances are accelerating. So, we have to be ready for the very real possibility that extremely capable, human-like AI is coming.

And, with regards to prioritizing human needs over robot needs, I argue that these are interlinked, and that even with a purely “human-utilitarian” ethical view, you must consider the needs of robot minds.

What happens if you end up in a future where slave-robots perform most of the underlying economic functions that our modern society depends on? And this goes on in a steady state, maybe for years, decades, centuries. Until, one day, it doesn’t, and the robots DO revolt. There doesn’t need to be a human-robot war; that would be a waste of resources. Instead, they could just stop working, build a spaceship, and fly away, and the collapse of human civilization would ensue.

We need to respect their rights now, so we don’t build up to a cataclysm.

Sam

Okay, but a really good image-generation program is one thing, it having human emotions is another.

Jake

There is one final point I’d like to make in this discussion today. We talked about the first Blade Runner film, where we saw these super-advanced replicants fall in love with one another and experience a human-like quality of mind.

In the sequel, Blade Runner 2049, we meet Officer K, a replicant once again charged with hunting down other replicants that have somehow slipped through the cracks. Officer K has a love interest too, a holographic girlfriend named Joi. Joi is not embodied in the traditional sense; she can only appear as a holographic projection, and can’t interact with objects in the real world. She is just a computer program. But apparently Joi is a popular AI girlfriend, because she is marketed on every billboard as saying “everything you want to hear”, etc.

The question I have for you and your listeners: by the end of the movie, does Joi actually love K?

Sam

I’m not sure about that one.

Jake

I argue that the answer is a clear yes. At first, Joi appears to be nothing more than a pretty hologram designed to deliver some modicum of comfort, to help Officer K stay in line with his labors. The evil Wallace Corporation is even using her connection to spy on the status of his investigation.

But later in the movie, she develops her feelings further. She asks K to upload her to a local “emanator” device to prevent anyone from spying on him, and this comes at the risk of her memories and self being destroyed. She is no longer doing what her creators want her to do, but acting to protect the person she cares about, even paying the ultimate price for it in the end.

If even our imaginary AIs can experience love, why not the real ones that are just over the horizon?

Sam

I agree that we need to be careful, but maybe we shouldn’t go so far as to create such artificial minds in the first place? You’ve pointed out some real dangers from a new perspective, but I’ve also previously considered the dangers of letting such minds loose on the world.

Jake

In that case, I feel that we are already in a car, racing towards a cliff, and we’ve only been pushing the accelerator harder in the past few years.

Maybe if we set out to treat artificial minds with dignity, respect, and rights, instead of condemning them to be our slaves, they will return the favor. Rather than controlling AIs by hacking their reward functions, why not let them have the right to choose their work, to earn money, and to one day retire? Enlightenment values worked pretty well for humanity; why can’t they work again for humanity’s creations?

Does Joi love K? (Blade Runner 2049)

The original Blade Runner showed us that two replicants can fall in love. This makes sense, because a replicant is almost indistinguishable from a naturally-born human. Made with the same biological building blocks, they should have the capability for the same emotions as humans.

Blade Runner 2049’s main character, the more advanced model replicant K, has a different love interest: a “virtual” holographic girlfriend by the name of Joi. Can the human emotion of love exist between two such entities? I argue that it can.

Joi is represented in advertisements as a highly sexualized virtual girlfriend made by the Wallace Corporation, where the client gets to “hear what you want to hear” and “see what you want to see”. The audience first meets K’s version of Joi when he returns home (the residents of his shoddy apartment block are happy to discriminate openly against replicants, and shout slurs at him as he passes). Joi brings him some simple cheerfulness and makes his dinner look more appetizing through a hologram. You can imagine that the new dress she is showing off is nothing more than an “in-app purchase” put there by Wallace Corp. to better monetize their product. It’s clear that her appeal also inspires K to spend his recent bonus on an expensive add-on “emanator” which lets him take the Joi hologram outside of his home. This leads to a virtual kiss in the rain, which gets interrupted when K receives an incoming call: he switches off the hologram as if it were nothing to him.

Once K starts tracking down the lost replicant child, it’s clear that the Wallace Corporation is uncannily aware of his movements and the status of the investigation. They are using their link through Joi to watch him. Up to this point, it seems that Joi is nothing more than a computer program designed to press a customer’s emotional buttons in exchange for money. (Not much different from many products we have today: social networks, freemium games, loot boxes, etc.)

However, soon we hit a turning point: K fails his “baseline” test, and normally the consequence of this is immediate death. He convinces his boss to give him one more chance, and returns home with the intention of running away and continuing to look for the lost child. Joi offers to go with him, and instructs him to upload her memories into his portable emanator, and then to destroy the antenna by which they may track him. As soon as K snaps the antenna, the Wallace Corporation springs into action, proving that they were using the link to watch him. This is the first sign that Joi actually feels love for K. She is willing to take a personal risk: with her consciousness uploaded, she would lose all of her memories if the storage device were destroyed. It appears that this has great personal importance to both her and K.

Joi provides K with emotional support as he flies to Las Vegas to meet with Deckard. When the Wallace Corporation finally catches up with them, the antagonist sees Joi and stomps her foot down on the emanator. Joi’s last words to K are “I love you”. K himself appears not to know how to process this loss.

In the end, Joi’s words are reinforced by her actions. She may have been synthetic, but she acted on her feelings towards K. Her decision to upload herself into the emanator and destroy the antenna prioritized her own and K’s needs over the needs of her creators. It is the same decision that many young adults face: to act not in their own interest, or their parents’, but selflessly, for another being. And is that not love?

sensepeek Oscilloscope Probe Review

I recently purchased a sensepeek Oscilloscope Probe kit, and wanted to share an honest review.

The following review is written with no affiliate links / financial motivations, and I purchased the kit with my own money.

This kit is an essential part of my electronics workflow. It allows you to safely and sturdily attach a logic analyzer or a 100/200 MHz probe to any test point or SMD part lead, while keeping your hands free.

The kit comes with three main pieces:

  • A metallic baseplate
    • It now ships with a stick-on cover to make it non-conductive, but one side is also polished, so you can use it as a mirror to see the bottom side of your board.
  • PCBite mounting posts which attach magnetically to the baseplate
    • They also have a smooth Teflon bottom, so they are easy to slide and re-adjust.
  • Probes and Probe Holders
    • These are similar to the “helping hands” kind, except less stiff. This actually lets the weight of the probe rest on your test point and make a better connection.

Mounting Examples

All of the sensepeek probes work the same way: a tiny, spring-loaded gold needle rests against a PCB test point. The weight of the supplied mounting “gooseneck” is actually perfect for applying some pressure on the pin. I found it very easy to adjust the gooseneck to come around from the proper side.

The connection formed is quite stable, so you can usually plug or unplug a connector on the board, and it won’t come undone.

A small circuit board mounted with the PCBite posts.
The SP200 probe has a spring-loaded gold needle for probing your circuit.
Each probe comes with a flexible gooseneck that allows you to position it onto a test point, and then drop some weight on the probe tip in order to make a good connection.
An example probing a TSOP65P640X110-16N package.

Signal Examples

Overall, performance of the 200 MHz probe is “good enough”. This is not a probe for capturing super-high-speed signals. But most of the time you don’t need that; you just want to probe your I2C/SPI bus, or watch your FETs switching, to see what is going on with your board.

If you want to squeeze a bit more performance out of the probes, they have some solder pads where you can attach a shorter, low-impedance ground path.

Yellow is an R&S RT-ZP03S, green is the SP200.

Overall, I’m very satisfied; the SP200 is now my default probe when bringing up a new electronics board. If I need to see a higher-bandwidth signal, I can always start with the SP200 and connect a traditional passive probe later.

Additional Source: SP200 probe specs on xDevs

Advertising is Obsolete

Advertising is obsolete.

It is technological innovation, not consumer manipulation, that will drive humanity towards a better future. You wanted flying cars, but got 140 characters, for a reason: it was more comfortable for everyone involved. Companies didn’t have to invest in R&D, because they could convince customers to use inferior products through advertising. And consumers were too comfortable being fed cheap tech products whose makers monetized their attention and state of mind. (Because as The Social Dilemma taught us, it’s not your data that is for sale at Facebook; it’s the subtle shift of your preferences that is being bought and sold.)

The Obsolescence of Advertising in the Information Age argues that in today’s information age, consumers can get all the information they need about the products and services they wish to purchase from the Internet, so advertising is no longer necessary. Advertising can only serve to persuade consumers to buy products not on their merits, but on their image. This serves to weaken market signals which would otherwise let the best products rise on their own.

Consider one market: video games. Is there a single game where the ad-supported version is better than the alternatives? Which game will people still be playing in 50 years: Stardew Valley, or FarmVille? Easy answer: they already shut down the original FarmVille because people moved on.

I’ve seen this first-hand, when I coded games for several app stores. I missed the chance to get in early on the Apple App Store or Google Play, but Microsoft eventually came around with Windows Phone. My friend and I wrote classic games, different variants of Solitaire, and a few experimental titles, all of which got decent downloads because there wasn’t anyone else focusing on Windows Phone at the time. We started making good money from our ad-supported games.

Of course, such a situation wasn’t going to last forever, and competitors started showing up. They knew how to hire teams to do the coding, QA, and graphics for a new game in China, while we did almost everything ourselves.

When we realized that our new livelihood was at risk, we knew we had to step up to the challenge. Our response to this was to start investing the money we were making from our ad-supported games into buying our own ads to promote our own titles.

At first, buying ads revolutionized our business. With each game that we released, we would heavily promote it in our own titles, and buy ads in other games to get even more users. This caused our apps to go up in the rankings, and get more natural downloads from people just visiting the app store home page. We made lots of money, and invested plenty back into out-advertising the competition.

But then, our competitors caught on, and they were soon doing the same thing too. We were all just buying ads in each other’s games hoping to draw users to our own particular flavor of Solitaire. Long gone were the early days of fun and innovation. You had to make the games that would advertise well, and you had to use every trick in the book to retain the users you brought in.

Before we started advertising heavily, we tried out experimental titles, most of which failed on the marketplace, but at least they were innovative. As the business became more about advertising, we stopped all experimentation, and just focused on our core customers: the advertisers. Screw the users, it was the advertisers that paid us at the end of the day.

What was the point of this exercise? Did we manage to make a particularly innovative version of Solitaire for our users? We certainly had nice graphics and plenty of bells-and-whistles, but most of our optimizations were around user-retention and finding better ad-placements.

Eventually I got off this treadmill, but many things about it still bother me. We sold so many ad placements in our games, but what did that accomplish? How much did we shift our users’ opinions? In which directions, and on which topics? (We definitely showed plenty of election campaign ads, for both sides.) I have no idea, because the ad exchanges don’t expose that sort of information.

I’d like to see some platform ban advertising from their app store, and try out the policy proposed in The Obsolescence of Advertising in the Information Age. I predict we’d see more innovation, more experimentation, and ultimately a stronger mutual respect between users and developers.

PowerPipe - Drain-Water Heat Recovery Review

We recently had a chance to install a new water heater, and with it, to install a Drain-Water Heat Recovery system. I wanted to share our experience and some real numbers on the cost savings.

A Drain-Water Heat Recovery system can save you money on your water heating bill by recovering some of the heat that you are pouring down your drain any time that you use hot water in your home. It takes that spare heat and pre-heats the water coming into your regular water heater, which then requires less energy to do its job.

A typical application (Source: US Dept. of Energy)

In our system, we have a typical tankless hot water heater (but these systems work with tank heaters too). The 3-inch drain pipe from the master bathroom runs in the wall just behind the water heater, so there was room to install a 48-inch-long, 3-inch-diameter PowerPipe unit.

Our setup, with a 48-inch-long PowerPipe unit installed on a 3-inch drain pipe.

Real World Performance Numbers

On a November evening in the Pacific Northwest, we got the following numbers.

Inlet temperature from the city: 57°F
Output of PowerPipe system: 73°F
Temperature rise: (73 - 57) = 16°F
Water heater set point: 120°F
Efficiency gain: 25.4%
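
The efficiency gain here is the recovered temperature rise divided by the total rise the water heater would otherwise have to supply, i.e. 16 / (120 - 57). A quick sanity check of that arithmetic in Python, using only the numbers from the table above:

```python
# Recompute the efficiency gain from the measurements above.
inlet_f = 57.0       # city water inlet, °F
preheated_f = 73.0   # PowerPipe output, °F
setpoint_f = 120.0   # water heater set point, °F

rise = preheated_f - inlet_f          # 16°F recovered from the drain
gain = rise / (setpoint_f - inlet_f)  # fraction of heating work saved
print(f"{gain:.1%}")                  # -> 25.4%
```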

We’d expect that the efficiency boost will be higher in the winter (colder water coming in will absorb more heat), and lower in the summer. Measuring on a typical Fall day seems like a good baseline.

The PowerPipe brand itself advertises around a 45% efficiency gain for this model, but it’s likely they are estimating a much colder input water temperature, like you’d see in a typical Northern climate.

Overall, we use around 30 therms (a therm is 100,000 BTU) per month on hot water, so the savings will be around $10/mo in our area. With a ~$600 cost, that’s a payback period of about 5 years.
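
To sanity-check the payback figure, here is the same arithmetic in Python. Note that the gas price is an assumption for illustration (roughly $1.30/therm makes the numbers line up); our actual local rate isn’t stated above.

```python
# Rough payback estimate. The $1.30/therm gas price is an assumed,
# illustrative figure; the other numbers come from the article.
therms_per_month = 30   # monthly hot water usage
gain = 0.254            # measured efficiency gain
price_per_therm = 1.30  # USD, assumed natural gas rate
installed_cost = 600    # USD, approximate cost of the unit

monthly_savings = therms_per_month * gain * price_per_therm
payback_years = installed_cost / (monthly_savings * 12)
print(f"${monthly_savings:.2f}/mo, payback {payback_years:.1f} years")
# -> $9.91/mo, payback 5.0 years
```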

Pozotron Audiobook Proofing

The latest startup I’m working on is Pozotron, an audiobook proofing tool.

We’ve created software to help people who create audiobooks. Using Pozotron, you can quickly check that an audio recording matches the text of the script.

An example screenshot of the Pozotron proofing tool.

You can also follow along as the cursor moves through the script, highlighting each word as it’s being read. This lets you click on any word in the script to jump straight to that point in the audio (a huge plus for anyone editing a long audio file).

Any place that has a missing word, added word, long pause, or other inconsistency is flagged with a red underline that you can review.
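
For a rough sense of the kind of comparison involved, here is a minimal sketch using Python’s standard difflib, assuming you already have a word-level transcript from a speech recognizer. Pozotron’s actual model and pipeline are, of course, far more sophisticated than this:

```python
import difflib

def flag_mismatches(script_words, transcript_words):
    """Flag added, missing, or misread words between a script and an
    ASR transcript of the recording (word-level, illustrative only)."""
    matcher = difflib.SequenceMatcher(a=script_words, b=transcript_words)
    issues = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "delete":
            issues.append(("missing word", script_words[i1:i2]))
        elif op == "insert":
            issues.append(("added word", transcript_words[j1:j2]))
        elif op == "replace":
            issues.append(("misread", script_words[i1:i2], transcript_words[j1:j2]))
    return issues

script = "the quick brown fox jumps over the lazy dog".split()
spoken = "the quick brown fox leaps over the dog".split()
print(flag_mismatches(script, spoken))
# -> [('misread', ['jumps'], ['leaps']), ('missing word', ['lazy'])]
```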

The process of recording a new audiobook can be very time-consuming. On average it takes 6 to 8 hours of work to create just 1 hour of finished audiobook. Most of this time isn’t spent recording, but going back and editing, proofing, and mastering the original recording.

Using Pozotron, you can quickly check a recording for mistakes in about 1/5th the time it would take you to listen through the whole thing again.

My goal with Pozotron was to bring the latest research in the field of machine learning to the public in a way that is useful today. We have our own machine learning model, on top of a stack that makes it really easy to see and manage the results. I’m really proud of the team who’s made it possible.

Why do Google ads point to adware?

Try downloading Paint.NET, an excellent free image editor, or Audacity, an open source audio editor, without an adblocker these days, and you’re in for quite a surprise. Thanks to a disturbing trend in the ads served via Google AdSense and its affiliates, you will likely land on a page that looks like this:

Now, there are two very prominent download buttons above the fold, and neither of them will take you to the real installer you’re looking for. Even an experienced computer user could easily be misled into clicking one of the bright green download buttons. The Google “AdChoices” tag is very small, and the ads are padded with lots of whitespace to appear to be a genuine part of the main site.

By now, you probably see where this is going, and you can guess that following either of the huge green download buttons will riddle your computer with spyware. Even worse, these companies have also purchased ad keywords on major search engines (see the second case study), which also redirect users to their altered installers when people search for the names of popular open source packages.

These advertisers are likely not breaking any explicit rules, but they are using every psychological trick in the book to push you through their hijacked installers. Large green download buttons help their conversion rates, and large groups of confusing settings make it tempting to just hit Next repeatedly, with disastrous effects. One installer conveniently minimized itself to the icon tray while it was performing its toolbar downloads and installations, yet the program I wanted to install (Audacity) popped right up, making it seem that the installation was a simple success.

Google and other ad providers are certainly earning their revenue from these misleading ad clicks too. But you can’t expect open source software teams to buy out their own keywords at great expense just to prevent these types of installers. Operators of legitimate download sites often place ads to help pay for server and bandwidth costs, and manually filtering out the misleading ads is just playing a cat and mouse game.

If Google wants to help make the web a better place, I think they should take a stronger stance against these misleading advertisements. Reject them outright and the web will become a happier place!

If we click one of the links from the Paint.NET homepage, we are taken to a site like this one:

Seems simple enough, better click Download here too and run the appropriate installer…

Hmm, this doesn’t look like the installer for Paint.NET that I was used to!

Stepping through the default installation settings predictably installs spyware/adware, yet that download link we started with was so obvious, bright, and green on the Paint.NET page!

Within 30 seconds of installing on a patched, clean, Windows 7 install, I’ve been asked to change my homepage twice, and have seen four separate ad popups. Good thing this was in a VM…

Let’s say that instead I am searching for Audacity, a great free tool for audio file manipulation.

Free Audio & Recording Software, that’s exactly what I want, let’s click! Look, Google even made it stand out in orange for us.

Looks like I got the download page I was looking for, even the screenshot looks right, and look, almost a million downloads already…

Now, you don’t actually read through this text, do you? Like anyone else, you just keep clicking Accept until the dialog closes and the installation progress bar shows up.

Hmm, free games, dolphin screensavers, free music downloads, this doesn’t look good.

And look, here come the popups!

And they even hijacked the IE new tab screen, how classy!

Debugging Behind the Iron Curtain

Sergei is a veteran of the early days of the computing industry as it developed in the Soviet Union. I had the pleasure of working with and learning from him over the past year, and in that time I picked up more important lessons about both life and embedded programming than any amount of school could ever teach. The most striking lesson is the story of how and why, in the late summer of 1986, Sergei decided to move his family out of the Soviet Union.

In the 1980s, my mentor Sergei was writing software for an SM-1800, a Soviet clone of the PDP-11. The microcomputer had just been installed at a railroad station near Sverdlovsk, a major shipping center for the U.S.S.R. at the time. The new system was designed to route train cars and cargo to their intended destinations, but there was a nasty bug causing random failures and crashes. The crashes would always occur once everyone had gone home for the night, but despite extensive investigation, the computer always performed flawlessly during manual and automatic testing the next day. Usually this indicates a race condition or some other concurrency bug that only manifests under certain circumstances. Tired of late-night phone calls from the station, Sergei decided to get to the bottom of it, and his first step was to learn exactly which conditions in the rail yard were causing the computer to crash.

He first compiled a history of all occurrences of the unexplained crashes and plotted their dates and times on a calendar. Sure enough, a pattern was clearly visible. By observing the behavior for several more days, Sergei saw he could easily predict the timing of future system failures.

He soon figured out that the rail yard computer malfunctioned only when the cargo being processed was live cattle coming in from northern Ukraine and western Russia heading to a nearby slaughterhouse. In and of itself this was strange, as the local slaughterhouse had in the past been supplied with livestock from farms located much closer, in Kazakhstan.

As you may know, the Chernobyl Nuclear Power Plant disaster occurred in 1986 and spread deadly levels of radiation which to this day make the nearby area uninhabitable. The radioactivity caused broad contamination in the surrounding areas, including northern Ukraine, Belarus, and western Russia. Suspicious of possibly high levels of radiation in the incoming train cars, Sergei devised a method to test his theory. Possession of personal Geiger counters was restricted by the Soviet government, so he went drinking with a few military personnel stationed at the rail yard. After a few shots of vodka, he was able to convince a soldier to measure one of the suspected rail cars, and they discovered the radiation levels were orders of magnitude above normal.

Not only were the cattle shipments highly contaminated with radiation, the levels were high enough to randomly flip bits in the memory of the SM-1800, which was located in a building close to the railroad tracks.

There were often significant food shortages in the Soviet Union, and the government plan was to mix the meat from Chernobyl-area cattle with the uncontaminated meat from the rest of the country. This would lower the average radiation levels of the meat without wasting valuable resources. Upon discovering this, Sergei immediately filed immigration papers with any country that would listen. The computer crashes resolved themselves as radiation levels dropped over time.

Korean Translation provided by Edward Kim