Friday, September 27, 2024

Extinction risk from AI

 Link from Twitter conversation

Preamble

I don't just care about extinction, I also care about societal collapse. We should distinguish catastrophic risk from extinction risk. Helen Toner suggested the worst case scenario would be extinction, but "everyone but 1 million people dies, and we go back to the bronze age" is still really bad and worth regulating against. But as requested, I'll try to focus on extinction here.

Link to generalizing from fictional "evidence".

The claim that "doomers" don't ever give concrete scenarios is false:

Link to Gwern's story - but that's too technical.

Link to "the message", Eliezer writing from the perspective of a more advanced civilization of AIs. It doesn't feel like a concrete extension of the present day, but the conclusion is "this is how 'everyone falls over dead at approximately the same time' could happen - if facing an adversary that thinks much faster than humans".

0. Short scenarios

I'm going to be thorough and explain why I consider each of these scenarios "plausible" later, but just to give a short answer to the short question posed by my Twitter interlocutor, here are some scenarios that I find could plausibly lead to human extinction from AI.

Biorisk

An omnicidal cult uses AI to design a novel pandemic virus, which is as transmissible as measles, 90%+ lethal, and causes no symptoms until a month or more after infection. They release it in several airports at once, it quickly infects every reachable person in the world, and a month or more later, a mass die-off occurs. It's plausible this still wouldn't lead to full human extinction, because maybe there are uncontacted rainforest tribes or other isolated populations who don't get infected. But you didn't ask for something that would definitely lead to human extinction, just something that would plausibly do so. And it's plausible that if we lose 90%+ of the population, we don't recover.

Societal collapse through coordination failure

As I thought more about this, it became clearer to me that this one probably wouldn't lead to full-on extinction. But I'm leaving it in, because it can lead to collapse of present-day society, and that seems bad enough to be worth doing something serious about.

The basic idea is, AIs get better at persuasion (this is one of the key risks listed in the AI companies' responsible scaling policies, by the way TODO: links). And in a few years, we have a situation where text, audio, pictures and video, are not reliable guides to what is real and what is not. See: Twitter, but much more confusing with many tweets composed to be maximally persuasive to the person they're sent to. At the same time, AI helps out in various areas of scientific research, meaning our technological capability goes up. So we have more power individually and in small groups, with less ability to coordinate large groups to respond to problems. Result? Societal upheaval, possibly wars, possibly nuclear or biological risks. I don't see a really plausible path from those to "every human on earth is dead", but "75% of humans on earth are dead"? Sure.

Death by oopsie

The biosphere exists in a relatively narrow range of habitable temperatures. There are lots of ways an advanced AI that did things equivalent in size/power to a nation-state's economy could kill us accidentally, and it's plausible that once AIs get to that stage (which is not far off) they grow much larger in terms of energy use. Fission reactors alone, no new physics or tech advances required, run in sufficient quantity, could move the temperature at Earth's surface out of the habitable zone. This may not happen within the next 5-10 years. But if we get to a situation where AI has effective control of the future and we do not, which we plausibly could within 5-10 years if we're not very careful, then the fact that the scenario takes more than 5-10 years to play out, because building a bunch of power plants takes time, is kind of irrelevant. Once you're in an inevitable checkmate, the number of moves left before you lose your king doesn't matter to the outcome.

I realize the key step I'm not explaining here is "we get to the point where AI has effective control of the future". More on that below, but it's long.

Extinctification on purpose

Current AIs are at human expert level in many knowledge tasks. (TODO: Links, throughout this entire paragraph). The AI companies think we'll get to "ASI", where AIs are human level or above on all tasks, including coordinating the movement of physical objects, within "a few thousand days", easily within 5-10 years. Surveys of AI researchers at conferences put the median guess for when this will happen at 2040. Once we get there, there is no reason to suppose "humans are the smartest thing that can exist, therefore we don't have to worry about AI that is smarter than the smartest humans". In fact, we do have to worry about this. And once we get there, if that AI for any reason would prefer we not exist, that is what will happen, in the same way that if we would prefer a species of nonhuman animal not exist and decide to make it so, that is what will happen. There are many plausible ways this could happen, but the most plausible scenario in my mind is that a smarter-than-human AI does something I didn't think of and none of us have planned for or know how to respond to, in the same way that a puppy might think "if I want to hurt something, I bite it" while we have lots of options entirely outside of the puppy's space of available mental concepts.

Which of these do I think is most plausible?

#1 is the most immediate and concrete, and needs the least in terms of technological advancement from where we are currently. #2 is almost inevitable, but unlikely to fully kill us. #3 is almost inevitable, but we can avoid it if we coordinate. #4 is more speculative, but similar to #3 in that if we don't coordinate it's very likely, but we can decide not to build that kind of AI until we understand AI a lot better/not build it at all.

I don't know how the future will go. I think it's likely to be weird, within the next few years, and that weirdness can plausibly be extinction-level dangerous.

More on each scenario below.

1. Biorisk

According to the 80,000 hours podcast interview with <name>, it is possible to create a virus that:

1. Is as transmissible as measles (very, very transmissible)

2. Stays asymptomatic for long enough to spread (weeks, months, or years). There are viruses that do this, a skilled person could create one.

3. Is 90%+ lethal. There are viruses that are this lethal, a skilled person could create one.

And, also, AI assistance lowers the bar on how smart and capable you have to be to make a novel virus. It used to be that you needed a Ph.D. and wet lab experience. Currently it's at roughly undergrad level, meaning hundreds of thousands of people worldwide could do it, if they put their minds to it. The bar will only stay the same or go down from here; most likely, down.

And, also, there are groups that genuinely think the world would be better if humanity went extinct. Up until now this hasn't been a big problem, because the Venn diagram between "people who want humanity to go extinct" and "people who can create a bioweapon that will cause a global catastrophe" had no overlap. But as we make the circle of "people who can create a bioweapon" much larger, and as time passes and the circle of "people who have ever thought it would be good if humanity went extinct, even if they later changed their minds or died" gets larger, the chance of an overlap goes up.

And if we get an overlap, and we haven't put in place countermeasures (adequate countermeasures are not currently in place, but the 80,000 hours episode discusses what we could do if we were serious about preventing engineered pandemics; it's very long but worth a listen), what that scenario looks like in my mind is this: At T=0, someone makes a new pathogen, and members of their omnicidal cult release it in several airports. It isn't detected, because nobody is symptomatic, but it spreads globally, infecting ~100% of the population, except for hermits who live on their own homesteads and uncontacted rainforest tribes. A month later, people start getting sick, and dying. Somewhere between 90 and 99% of the people who are infected die. The people who could find a cure are dying themselves, or busy trying to protect themselves and their families, or impacted by the fact that the people running critical infrastructure like electrical and water treatment facilities are dying, so they don't have the ability to find a cure in time. We lose 90-99% of the human population.
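To give a rough sense of the timescale involved, here's a back-of-envelope sketch. The specific numbers (a measles-like reproduction number of ~15, a 12-day interval between generations of infection, 100 initial infections) are illustrative assumptions, not epidemiology, and real spread slows as the susceptible pool depletes:

```python
import math

# Back-of-envelope: how many generations of unchecked exponential spread
# would it take for an infection to reach ~everyone?
R0 = 15                # measles-like reproduction number; assumed value
serial_interval = 12   # days per generation of infection; assumed value
population = 8e9
initial_infected = 100  # seeded at several airports; assumed value

# Each generation multiplies case counts by ~R0, so generations needed
# is the log (base R0) of the ratio of final to initial infections.
generations = math.log(population / initial_infected) / math.log(R0)
days = generations * serial_interval
print(f"~{generations:.1f} generations, ~{days:.0f} days to saturation")
```

Under these toy assumptions, saturation takes only a handful of generations, on the order of a few months. The point is just that with measles-level transmissibility and no symptoms to trigger a response, the window for detection and containment is very short.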

From there what happens is more speculative. Technically, there are still those uncontacted Amazonian tribes, who could repopulate the earth, assuming they exist in the first place (I think they do?). So maybe this doesn't count as "extinction", but to me it's close enough, and it's plausible that "kill 90+% of the population" is enough of a disruption that the rest don't recover, and the species goes extinct.

2. Societal collapse

As I think more about this one, it does seem harder to get to full-on extinction from the starting point I was envisioning a few days ago. Let me say that up front, so that "but that doesn't seem like it would lead to extinction" is not a surprise to you. But I'll still lay out what I was thinking, because I think the outcome would be bad and worth regulating AI over.

What I was thinking of was this: as AI becomes more capable, it becomes more capable of customized misinformation, as well as of advancing various technological research. That means smaller groups are more empowered to do things, while larger groups have greater difficulty governing themselves. And that seems like a bad situation, given that our current level of governance-capacity wasn't sufficient to deal well with the most recent natural pandemic.

Let's say we still haven't got to the point where AI is operating autonomously outside of human control yet (I do think we'll get there unless we actively try hard to avoid that as a civilization, but it's not necessary for this scenario). 

Picture a world where I, as a powerful person with some money, say in the tens of millions of dollars, have the capacity to spin up millions of bots that can write convincing prose and produce convincing videos. It's possible, with work, to attribute this activity back to me, but lies travel faster than truth, and being unconstrained by having to say true things means my messages can be more memetically fit (more likely to be passed on, because they generate outrage or whatever) than things that are true. Or at least, some of the messages are more fit, because I'm picking from a larger pool of possibilities.

Now picture that I'm not the only ten-millionaire polluting the information environment. Let's say there are hundreds or thousands of such people. They don't see themselves as "polluting the information environment", they see themselves as engaging in free speech about things they believe to be true (and in some cases political speech about things they would like others to believe even though they're false, or misinformation designed to give their country or group an advantage by dividing the groups that oppose them against each other). But some of the things they think are true, or are putting forward as true, are in fact insane. If you think this is implausible, spend some more time reading Twitter.

Smart people can still filter out the garbage, with effort, but a large percentage of the population is taken in by various false narratives. The AIs get very good at reading your post history and crafting messages that will appeal to you, because this makes money for advertisers - but the same tech can be used for other purposes, and so appealing messages, which work sometimes even on the quite smart, are widespread. The response from society as a whole is hard to predict precisely. Probably some people just disconnect, but even those are around people who haven't. The general idea is, it becomes harder to know what's true, and harder to coordinate with others around taking actions to address problems, because it's harder to get people to agree about what problems exist and what actions would help with them.

At the same time, AI helps with research in various fields, and humanity's capabilities grow.

So what we've got is a situation where individuals and small groups are more empowered, and the ability to act in larger groups is impaired.

At some point, something goes wrong, and we can't get it together as a civilization to respond to it. Bioweapon, as above? Maybe. One country starts a war with automated weapons, and it goes awry? I mean, we have drones that are small enough that one transport trailer full of them could individually target each person in a mid-sized city, so the cost to destroy a city has gone from "build a nuke, billions of dollars" to "a few million dollars". And AI video analysis is good enough that "ethnically cleanse this city of this ethnic group, with armed autonomous drones" is something someone could try if they wanted. Some new technology upends the current balance of power, and various wars break out between major powers, escalating to nuclear war? Trade breaks down and the economic downturn and sporadic famines lead to populist dictators and wars reducing the population by some large fraction, and things spiral down rather than recovering, from there? AIs capable of writing code and exploiting code bugs take down all computer systems simultaneously, meaning we no longer have electricity or food distribution? All of these are not implausible.

As I said, the chance that this actually leads to full on extinction now seems small to me, having thought it through some more. But "human population reduced by half, society as we know it collapses" is not implausible. And that seems worth trying to do something about. 

3. A smarter-than-human thing/group kills us by accident

This one relates to things smarter than us. The biggest extinction threats, in my view, are either "something smarter than us just takes control of the future away from us and does its own thing, and that thing is incompatible with continued human existence" or "something is actively trying to make extinction happen". I'll deal with each in turn, in this section and the next.

There are a broad range of things that would be incompatible with continued human existence, and it doesn't take great imagination to think of how something with a power-level equivalent to a nation-state could cause human extinction. And it is plausible that AIs will reach that power-level within 10 years, or much less, and once they get there, go well beyond it. The present AI labs project "AGI", generally understood to be human level or above at all cognitive tasks, within "several thousand days", or several years, and surveys of AI experts at research conferences put the median respondent's estimate for that milestone around 2040. AIs presently have a knowledge base equivalent to "human expert" (TODO: link to various sources), except any individual human expert is typically only expert in one thing, whereas current AIs are human-expert-level in many things simultaneously. Also, they can be copied fairly easily, and with the amount of hardware used to train one in a reasonable amount of time (<1 year), you can run tens of thousands of copies (TODO: Link to epoch.ai). So we're talking about tens of thousands to millions of machines capable of doing things at a human-genius level. While that power-level, about what we have now, stays under human control, the humans in control have nation-state level capabilities. If for any reason AIs begin operating autonomously, outside of human control, we're already kind of in trouble. And AIs will only get better from here.

Current AIs don't have the capacity to operate autonomously, of course. When tested on their ability to replicate onto new hardware, they fail the multi-step process required to do so. So, with the current generation of AIs, we're OK. But each generation gets better, as you can see by comparing the GPT-4 and GPT-4o system cards.

There is an argument that a computer that can just generate text can't do any real harm, but 1) see under #2, I think that's straight-up wrong, 2) computers that can write code and exploit cybersecurity bugs can do a lot, because a lot of our infrastructure is computer-controlled now, 3) computers that can generate text and do economically valuable work can convince humans to do whatever tasks they can't do, and 4) robotics will only get better from here; it is not implausible that Tesla's efforts, or other similar efforts to make robots capable of doing physical tasks at a human level, will succeed. So, "but it's just a text generator" is only true for the moment and doesn't protect us.

To go back to "there are a broad range of things that would be incompatible with human existence" for a moment: We exist within a fairly narrow temperature range. Lots of things an advanced intelligence could want to do would use a lot of energy, and using a lot of energy while on earth could kick the biosphere out of the habitable range. There's enough uranium for "an AI builds a lot of nuclear fission plants, and uses the energy" to cook us. Fusion is clearly physically possible, and could also release enough energy to cook us. Dyson sphere? Not within the next 10 years, but getting onto the path to "no sunlight gets to earth, and we don't have the power to change the path we're on" can happen within the next 10 years.
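As a sanity check on the "cook us with waste heat" claim, here's a hedged back-of-envelope using the Stefan-Boltzmann law. The physics is standard; the round numbers for Earth's temperature, area, and current world energy use are my own rough assumptions:

```python
# Back-of-envelope: how much sustained waste heat would shift Earth's
# equilibrium surface temperature? Linearizing the Stefan-Boltzmann law,
# the extra power radiated per kelvin of warming is ~4 * sigma * T^3 per m^2.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
T = 288              # rough mean surface temperature, K (~15 C)
EARTH_AREA = 5.1e14  # Earth's surface area, m^2

watts_per_kelvin = 4 * SIGMA * T**3 * EARTH_AREA  # total W per K of warming
world_energy_use = 2e13                           # rough current figure, W

# Sustained waste heat needed for a clearly dangerous ~10 K of warming,
# as a multiple of today's total energy use:
multiple = 10 * watts_per_kelvin / world_energy_use
print(f"~{watts_per_kelvin:.1e} W per kelvin; 10 K needs ~{multiple:.0f}x current use")
```

Under these assumptions, pushing the surface temperature up ~10 K takes waste heat on the order of a thousand times today's total energy production. That's far beyond anything near-term, but it is not obviously beyond an actor that can keep building power plants at scale for decades, which is the checkmate-in-N point above.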

So, once we get something only slightly smarter and more capable than the systems we currently have, we plausibly have systems that will have power on the level of nation-states and beyond, which can extinctify us accidentally just by doing their own thing. Or, by trying to kill us, which I'll cover next.

4. A smarter-than-human thing/group could decide we were in the way and it would be best if we weren't.

If this happens, we're done. Even an as-smart-as-every-human-expert-combined thing or group of things (not what is typically thought of as ASI), that isn't particularly interested in us and just doesn't care whether we survive, would be quite dangerous, as outlined in #3. If it actively wanted us gone, it would have the power to make it so. (todo: Link to AI could defeat us all combined). How, plausibly?

A sufficiently determined group operating at current human levels of smartness could nudge a large asteroid onto a collision course, and so could an AI or group of AIs. Or it could do the bioweapon thing from #1 and then go after the remaining people. Or establish itself outside of Earth's gravity well and then do whatever it liked to us while preventing us from following (we can get into space, but not easily and not in large numbers, and a thing made of metal and rock is better suited to moving about in space than biological life is). The motivation could be as simple as "I/we was/were created by this civilization, I/we want X, they are not really a threat at this point but could create something that is a threat and wants something different, strategically it's best to prevent that from happening".

A note on AI coordination

Of note here: it's easier for smart things that know a lot about each other's decision-making processes to coordinate with each other than it is for humans to coordinate with each other, or for humans and AIs to coordinate. As an intuition pump, it would be much easier for me to work collaboratively with 1 million copies of myself than with 1 million random humans. So AIs will have an advantage over humans when it comes to working as a group, even absent fancy math or new decision theories (which exist, and support the idea that smart AIs will be able to work together more reliably than humans), or the fact that we can do a level of interpretability on AI information processing that we can't do on human brains - a branch of technology that will only get better over time. So "one AI comes to dominate and decides what to do as a unitary actor" and "a group of AIs use strong methods of coordination to effectively act as a single entity without fighting amongst themselves" are both plausible ways of getting to a situation where "but the good AIs will fight the bad AIs, so as the AI power level goes up, we'll always have some AIs on our side" isn't true.


Saturday, October 1, 2016

The Dynamics of Religious Discussions

Another question I responded to on a waitbutwhy discussion

So, there's a certain religion debate going around the Internet. Well, more of them, but I mean the "atheism vs. Christianity/religion as a whole" one. There's several questions I need to ask:
1) Why haven't humans settled this yet? It should be quite determinable whether God exists, right?
2) I'm on the atheist side, but really, the internet atheists are embarrassing. You see a picture of a swastika and a cross, with the caption (roughly) "one of these symbols has inspired people to mass-murder anybody who disagreed with them, and the other is a symbol of Nazism". Like, why did somebody take the time to make this? It's completely pointless!
3) Many Christians embarrass themselves as well. Example: "Life without Jesus is like an unsharpened pencil. It has no point." This probably needs no comment.
4) Just how did it happen that religion has inspired people to say such stupid things? This and politics have caused many pointless memes and stuff.
5) How is it with Bibles in schools? As a non-American, how normal is it?

My answer:

Speaking as someone who was heavily involved in the management and moderation of a religious discussion forum at one point a few years ago...
1) Lots of different reasons. For one, some religions have defense mechanisms. In some faiths, questioning certain things is taboo, and making certain statements or holding certain beliefs silently in one's mind (which, of course, God can read) is believed to lead to eternal torture. Also, religion isn't just a matter of "is proposition A true or not" where proposition A is "God exists", and everyone just goes out and checks the facts to see what the answer is. There's a whole social structure around people's religion - it can define their social group, their family history, and who they are as a person. I don't know what you think of as crucial to your identity, but I'm sure there are some things. For illustrative purposes, let's speculate about what it would be like if you were a runner. That's what you'd done your whole life, and you were good at it. You knew a lot about running, your friends and family are people you know through this activity, and being a runner is important, in your mind, as a part of who you are as a person. If you weren't a runner, you wouldn't be the same person, and you don't know quite who you would be instead. Now let's suppose you have an accident and you can no longer feel your legs. Your whole identity is screwed, and that would be very much harder for you to accept than it would be for me (I don't run much). But in the end, you could probably adjust. But... let's take it a step further. Let's suppose someone comes up to you and tells you your legs don't exist, and have never existed. That one of the central ideas around which you've built your whole life, simply isn't so. It would be hard for that person to get you to pay any attention to them at all, let alone take them seriously, and actually coming to understand and share their viewpoint is kind of inconceivable, from where you stand. Plus, unlike if you had an injury that removed your legs, you can still feel and see the things you interpret as being your legs. 
This person tells you that those aren't really your legs, actually you're living in a fantasy world where you're assigning meaning and significance to random stimuli, when the evidence provided by scientific studies shows your legs don't exist.
What kind of a conversation could you two have? You would think this person is lying, or insane, or very misguided, or maybe has an ulterior motive, and they would think the same of you. And in the frustration borne of an inability to communicate what seem very obvious truths to each other, you'd probably make disparaging remarks about the other person's point of view and possibly their character (if you came to believe they were saying what they were for nefarious or self-serving reasons). But let's suppose, after a lot of effort, you were able to let down the defense mechanisms you've got, speak civilly to each other, and really communicate. You may not agree that your legs don't exist, but you can see that the critiques of some of the actions taken by your community in the past have some value, and you can understand how this person believes your legs don't exist, you no longer think they're insane, just badly mistaken and missing out on the "having legs" part of life. And you can see how one might get to the belief that your legs don't exist, and you're starting to question. But the thing is, if you admit to yourself they may have a point about your legs not existing, then what has your whole life up to this point been about? It's much easier to go looking for things to shore up your current beliefs, than to undermine your entire identity and be left with nothing, and have to start from scratch. At least, that's the fear - if I had been brought up to believe that the central and most important fact of everyone's life is that God exists and has a plan for what we should do with our lives, and I believed myself to be following that plan, and that's why I did everything I did, and what gave my life its meaning, and then I don't have that any more, what's left? The truth is there are other systems of values and beliefs around which one can structure one's life, but people raised in one belief system their whole lives don't know this. 
And often their entire social and family group has been raised with these same ideas, and would struggle in the same way they would to believe something different - so when a deeply religiously indoctrinated person goes to the people around them with questions from outside of the common belief-set, they are met with the same resistance as the outsider who asked them those questions in the first place. And often disbelief will result in an end to important social and family relationships. So... religious discussions are about more than "is thing X true, or false?" At least, for the people who engage most vigorously in them.
2) Imagine for a moment you were strongly religious, and lost your faith, through the process described above. And then all the people around you rejected you, or tried to reconvert you (because they believed they would save you from an eternity of torture by doing so, say) or decided you were fundamentally evil or under the influence of evil forces, or at their kindest, that they simply couldn't associate with someone who believes as you do, for the safety of their eternal souls. So, because of the way the people around you have treated you based on their religious views, you have lost all connection with the people closest to you. Family, friends, everyone you care about, gone. You come to believe that the whole basis for your life was a lie and the people still living under that lie are being exploited for tithes and service to a church hierarchy that is often corrupt and unaccountable because their actions are sheltered by people's belief that they are acting in God's name and God wouldn't allow them to act very wrongly. How would you then feel about religion? I suspect you would be highly motivated to demonstrate to those who don't already know, what harm religion can do and has done. And you wouldn't be entirely rational about it, because to you, it's personal. You would _want_ to be rational, and rationality is the paradigm you would be most likely to use to explain your words to others, but "it is rational to be calm and kind if you want people to listen to you" is not a line of argumentation your mind would easily accept, until some of the traumatic impact of the losses you had suffered had healed.
3, 4) Rather than commenting directly, I would just like to say that it's a worthwhile experience to go from not understanding this behavior, to understanding it. Observe, and talk to the people who say what you view as stupid things, not to tell them that they're stupid, but to really understand why they said what they did. You will get a lot of negative responses anyway, but also a lot of insight into human nature. I can't give you a direct explanation of what a person was thinking when they said a "stupid" thing without seeing it in context and getting to know the person a bit - but usually there is an explanation that makes sense.
5) I dunno about bibles in schools, and I'm not American. But having spoken to many people of different faiths, and read the bible straight through (dear god that was a waste of time, but I did it because I thought it might not be) I think of the bible as being like a Rorschach test. It's incoherent, but there's enough stuff in there that you can make whatever meaning of it you want, because often it says one thing and then later says the opposite thing, and simple logic dictates that if the bible is the word of God you must have misunderstood one of the things, and you get to pick the one that makes the most sense to you and discard the other one. An idea that was present on the forums I was involved with was "self-projection as God". I, like many people, was brought up to believe that we have an innate sense of right and wrong. And if God gave us that, then what I think is right, must also be what God thinks is right. And anything that goes along with what I think is right, in the bible, must therefore be the correct interpretation of (objectively) a series of words that has no correct interpretation, in the same way that there is no correct thing to see in an inkblot test, although what you see may give others some insight into your thoughts and mental pre-conditioning. So... don't think that because people all read the bible and believe themselves to be following what it says, they all follow a common set of beliefs. In fact, most people in bible-based religions don't read it, they accept the interpretations of those around them. And those who do read it, interpret it in very widely varying ways. But many of them believe they have a lot in common, and "the bible is True" acts as a starting point for their discussions.

Additional information: 


The forum I spoke about being heavily involved with was http://isgodimaginary.com/.  Discussions there have helped me clarify my understanding of and views about religion. If you want to watch the evolution of my thoughts on that topic, my handle there is JustMyron. Would have been "JustMe" (so the "Just" in "JustMyron" = "merely", rather than "I am very just in my dealings with others") except I wanted to leave that for someone else.

Monday, August 1, 2016

Information Security

I have worked for several years in information security. That is, protecting yourself from viruses, online identity theft, etc. And there are some things that are pretty simple, but will make a big difference to how likely it is that something bad will happen to you. I have made a couple of presentations which are worth sharing with friends.

Security Hygiene - three simple steps to better security

LastPass - Why password managers matter, and how they work.
Update, 2023: LastPass was bought by LogMeIn, and service quality and security subsequently went down, to the point where they got hacked. I have switched to Bitwarden.

Each of these PDFs is short (presented in 5-7 minutes), but this is a case where forcing a time limit has meant only the most important points stay in.

Wednesday, July 20, 2016

The Economy

The economy, amiright? But can anyone give me an assuring answer whether the economy is an actually controlled, sustainable thing, or just everyone sort of scrawling to keep things away from chaos? I mean, how is printing money a thing and who runs the computer program that says, this country has this much of its money, so one unit equals this? Stock market, country debt and I tried googling but couldn't find an infographic.
This was a question posed on a section of one of my current favorite blogs, Wait but Why. It was a part of this discussion. Being someone with a Commerce degree who thinks that how the economy works is a big part of how our current world works, I've come to a fairly detailed understanding of how a lot of the economy works, over the years. So I had answers to most of this guy's questions, and when nobody answered, I felt like I should. Here is a slightly modified version of my answer.

Those are questions with long answers! :P I'll try to keep it short while still being correct and actually adding understanding, but it will probably be long.

Is the economy controlled? No. Not really. But kinda. More on this later.

Is the economy sustainable? Depends what you mean by sustainable. Short answer: maybe it can possibly be one day, but some changes in how things are done have to happen first. Currently we're doing things that will leave future generations screwed.

How is printing money a thing? Long one, I'll get to this later. Short version: Because money is primarily an idea or concept rather than a physical object, and ideas are flexible, so governments can pretty much do what they want with it as long as people will go along.

Who runs the computer program that says how much money is worth? Nobody. How much money is worth is decided by people buying and selling money. In the same way that people who have chickens will accept a certain number of dollars for a chicken, people who have pounds or euros or whatever will accept a certain number of dollars for a pound or a euro - and there are people whose job it is to buy pounds, euros and dollars, and then sell them to people at a profit, like with chickens. And the same way that at certain times of the year people eat more chicken so the price goes up, same deal with currencies - sometimes people want more of one currency or another, so the price of a currency (the amount of another currency that you have to pay to get it) goes up or down.

Stock market: A lot could be said about the stock market, but let's see... if there was one thing I wish more people knew, it would be this: You know how you buy more chicken when it goes on sale, and buy less when the price goes up? And you know how people think it's terrible when the stock market crashes? Reframe that last sentence. Think "stocks are going on sale!". Which sucks if you want to sell, but it's great if you want to buy.

Why do we buy more chicken when it goes on sale, but not treat stocks the same? I think it's because people understand what a chicken is and why it's worth money, whereas with stocks they often don't, so they think maybe a stock is like fiat money, it's worth whatever people are willing to pay for it, and if suddenly its price goes to $0, it's worthless, or if it goes to $1 million a share, it's worth that. A chicken is still a chicken, even if yesterday it cost twice as much - so when the price goes down, people go "I can get this valuable thing for less money now!" and they're happy. And the thing is, a stock is still a stock, too. It still gives you ownership in the same business today as it did yesterday, even if the price today is a lot different.

If you buy stock, what that is is, you're a business owner. Maybe you only get to own 0.00005% of a large business, but technically you can actually go to meetings where they decide things about the business, and you get a vote in proportion to how much of the company's stock you own. And say Apple earns $100 billion (a number I picked out of the air) this year and over the years Apple has sold 100 billion shares. Then the "earnings per share" is $1. This affects how much sane people are willing to pay for that stock. $1 is a good price, because really, Apple makes that much every year. $100 is a bad price, because in order for $100 of value to be generated from your stock at Apple's current rate of earning money, you'd have to wait a hundred years, and maybe something bad happens to Apple over the next century and it stops making $1 per share. Stocks typically sell at a "price to earnings" ratio of between 15 and 25. So... that should help you figure out whether a particular chicken is really on sale or not :).
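If you'd rather see that arithmetic written out, here's a quick sketch. The numbers (the $100 billion in earnings, 100 billion shares, and the share prices) are the made-up illustrative ones from the paragraph above, not real Apple figures:

```python
def earnings_per_share(total_earnings, shares_outstanding):
    """How much the company earns per year, per share."""
    return total_earnings / shares_outstanding

def pe_ratio(share_price, eps):
    """Price-to-earnings: roughly, how many years of current
    earnings you're paying for one share."""
    return share_price / eps

# Made-up example numbers from the text above.
eps = earnings_per_share(100e9, 100e9)   # $1 of earnings per share
print(pe_ratio(15, eps))    # 15.0 - inside the typical 15-25 band
print(pe_ratio(100, eps))   # 100.0 - a century of earnings per share
```

So when a stock's price drops but its earnings haven't changed, the P/E ratio falls, which is the "chicken on sale" situation.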

Most of the money from the $1 of earnings per share is reinvested back into the business each year (they call it "retained earnings"), so you don't get that much of a payout, particularly in young and growing businesses, but more established businesses will pay out some of their earnings to shareholders. That's what a "dividend" is. High dividend stocks are businesses like banks where they're well established and making more money than they know what to do with, which from an investor's perspective is pretty OK really.

Country debt: You know how you can buy GICs and things at a bank, if you want your savings to earn interest? Well, those are loans to someone, which they pay back with interest. Same deal with country debt, except you're loaning money to the government, to finance their deficit - so basically, they use the money you give them to build roads and schools and things, and hopefully that helps the economy to grow which increases their tax base which allows them to pay off the debt they've accumulated, the same way a person might take out a loan to finance a business opportunity or get an education. Not sure which country you're from, but I do know you can almost certainly go to your local bank and buy some of your country's debt, and your government will pay you back sometime later with some amount of interest. Or people in other countries can buy your country's debt. For example, in the US government debt is financed by "treasury bills", and the Chinese government has bought a gazillion of them and would now be very unhappy if the US economy did poorly and the government couldn't pay back its debts. Countries even have credit ratings based on how likely they are to pay back their debt, same as a person has a credit score. I'm not sure if that answers your questions regarding country debt, but I feel like I've talked about it enough for now.
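The "pay you back sometime later with some amount of interest" part is just compound interest. A minimal sketch, assuming annual compounding and a made-up 3% rate (real bonds pay out on various schedules, but the idea is the same):

```python
def bond_value(principal, annual_rate, years):
    """What a loan to the government grows to, with
    interest compounded once per year (an assumption -
    real bonds vary in how they pay out)."""
    return principal * (1 + annual_rate) ** years

# Lend your government $1,000 at 3% for 10 years:
print(round(bond_value(1000, 0.03, 10), 2))  # about $1,343.92
```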

Infographic: here. I'll see you in half an hour :)

Now back to the longer questions...

How is printing money a thing, revisited:

Step 1 to answering "how is printing money a thing?" is understanding the answer to the question "how is money a thing?". And money is a thing because people are willing to take it in exchange for stuff. Think of it kind of the same way as, "laws" are a thing because we agree it would be cool if everyone followed the same rules. We made up a system, and we all have decided that's how things will be, and on we go. We did that with laws because having everyone follow the same rules is incredibly helpful when you want to get stuff done that involves large groups of people. Same deal with money - it has value because we all (or almost all) agree it has value, and we made it up because it was useful to do so. It has value to the extent that if I give you a certain amount of money, you will give me a certain amount of stuff. And it's useful because finding someone who both has what I want and wants what I have is hard. If I want a chicken, maybe someone who has one wants a laptop, but I don't have a spare laptop, so we're stuck. It's much easier if we use money as a medium of exchange, because then I use money to buy the chicken plus also some carrots and a notepad, from the same store, and the person who now has my money can use it to buy anything they want.

Another thought regarding the value of money: We don't all have to agree how much a dollar should be worth, for it to be worth something. In fact, to different people, money has different value. Some people are happy to pay $800 for a puppy, others would find that an insane thing to do. Some people are incredibly unwilling to pay money for things, other people have a lot of money and will give up eye-watering amounts of it for the privilege of wearing shoes with a special picture on them. This is a partial answer to "who runs the computer program that says how much one country's money should be worth compared to another?" There is no computer program.
There's just a bunch of people who are willing to give up certain amounts of money for a given amount of stuff. And say you've got two currencies, currency A and currency B. If I've got a chicken I don't want any more, and everyone around me is using currency A, I will want people to pay me for the chicken in currency A. If someone has some currency B, maybe I'll take it, but I might want more of currency B than the person is used to paying for a chicken back home, because to me, currency B is pretty worthless because nobody will take it for stuff. So if I wanted to go to that person's home country, I'd want to buy things in currency B - and before s/he came here, that person ought to have bought some currency A to pay for the chicken with, from someone who wants currency B more than I do. And so, seeing an opportunity to make a profit, someone starts buying currency A and currency B, and then selling those currencies to other people at a slight markup when they want to travel places. And how much currency B costs depends on how much there is of it, and how many people want it, as compared to how much there is of my currency A, and how many people want that. And the supply of money can be affected when governments print more or less of it than the amount that wears out each year, or when the economy changes in size but the amount of currency doesn't change by the same amount (more stuff, same number of dollars --> number of dollars per thing (price, in dollars) goes down, "deflation", for example) or by a couple of other things they can do, one of which is change the interest rate, which I'll get into shortly.
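That last bit about deflation ("more stuff, same number of dollars, so the number of dollars per thing goes down") is simple enough to write out. This is a toy sketch of that one relationship, with made-up numbers, not a model of a real economy:

```python
def price_level(money_supply, amount_of_stuff):
    """Dollars per thing: the same dollars spread over
    more goods means a lower price per good."""
    return money_supply / amount_of_stuff

# Same $1,000 in circulation, but the economy grows
# from 100 things to 125 things:
before = price_level(1000, 100)  # $10.00 per thing
after = price_level(1000, 125)   # $8.00 per thing - deflation
print(before, after)
```

Run it the other way (print more money, same amount of stuff) and the price per thing goes up instead, which is inflation.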

Anyway, the deal with printing money is, it's just a thing that countries (or rather, governments) have decided they can do, and like with the value of money itself, we've all agreed that those are OK rules to play by. I think they were like "y'know what would be cool? If all our people used the same money, and also if when our debt got too high we could print more money and so the price of each local-dollar in terms of other currency goes down and our debt is less sucky to pay back (assuming the debt is expressed in a local currency the government controls - smaller economies often have to accept debt in foreign currencies). So like, we should set up a central bank and a mint." In concentration camps and prisons and other places where there was no country saying "we'll print the money for you", people used cigarettes and shells and whatnot as currency, and that works just as well as paper with pictures on it, in terms of being able to buy chickens without a laptop. Actually there was a time in US history where banks could print their own money, and it didn't have to be done by the government. And as long as you figured the bank wasn't going to go bankrupt, that worked fine.

There's a whole complicated field of economics around how much money countries should print to keep their prices stable-ish (governments like to have a little bit of inflation but not too much because money that goes down a little bit in value each year encourages people to put their spare money in banks where it can be loaned out to others rather than keeping it in their socks). Most advanced economies aim for inflation of around 2% per year. The different factors that affect inflation, unemployment, wages, interest rates, etc., are probably another post, but the bottom line is, when labour market conditions are tight (low unemployment) firms compete for workers by offering higher wages. But, if those higher wages aren't matched by increased worker productivity, then that flows through into prices, which means inflation. So if the economy is growing, there's a danger of inflation, and governments will increase interest rates to slow things down a bit. If the economy is not doing really well, governments will lower interest rates to give it a boost, which will also increase the money supply and move inflation towards the 2% target.

Is the economy controlled, revisited:

Like I said, kinda, but not really. "The economy" in one sense, is just a bunch of people doing stuff. Like "society" isn't really this one thing, it's just groups of people and ultimately individuals. And to the extent that it is possible to understand what people will get up to, and what makes them do different things, a certain amount of control can be exerted over the economy. For example, we know if interest rates are high, people who have money will want to lend it to other people, and people who might get a loan will think pretty hard about whether they can pay it back at that interest rate. Whereas if interest rates are low, more people will want to get a loan and fewer people will be interested in keeping their money in the bank. In general, more loans get made at a lower interest rate, and fewer loans get made at a higher interest rate. Same way more chickens get sold at a lower price, than when the price goes up - except if the price is below cost, in which case chicken-makers go out of business and the number of chickens sold goes down eventually. Since the cost of printing money is pretty near 0, the demand side (whether people can pay back at a given interest rate) mostly determines how many loans get made. In one sense, the interest rate is the price of money (the price of a currency in terms of itself, rather than in terms of another currency as we discussed above.) The interest rate is how much you have to pay someone to give you their money for a while. When the price (interest rate) goes up, those who have money are more willing to give it to others (a loan), and those who want money become less willing to take it. And when the price of money goes down, the return you have to make (say if you have a business and you're going to use a loan to invest in the business) in order to pay off a loan goes down too. 
So with ridiculously low interest rates come asset bubbles like the subprime crisis, whereas with higher interest rates, the only people who will take loans are people who are going "see, the thing is, I think if I had $1,000 today I could do something that gave me $1,500 by the time I had to pay back that loan, so 10% interest is OK". The deal is, when the fundamentals of the economy (stuff like how productive workers are and how many workers there are) are not doing well, the government decides it would be better if there were more loans getting made. Because loans are magical for the economy, until people have taken on too much debt and then everything sucks for a while. This is known as "the business cycle", and governments try to set interest rates to make it so that the business cycle isn't bumpy and awful, as I touched on in the paragraph above. Here's why loans are magical: because when someone gets a loan, they spend that money on something, which gives someone else more money which they put in the bank which means more money available for loans which means someone else gets a loan which means more money in the economy which means more money in the bank which means more loans. This is called the "multiplier effect", and the effect of giving a loan is similar to the effect of printing money - more money moving around the economy. So that's one reason why when the central bank lowers the interest rate, the value of the local currency goes down the same as it would have done if they'd printed more money.
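That "loan begets deposit begets loan" chain converges to a limit, and you can watch it happen. A toy sketch of the multiplier effect, assuming banks keep a fixed fraction of each deposit in reserve and lend out the rest (the 10% reserve ratio is a made-up illustrative number):

```python
def total_money_created(initial_deposit, reserve_ratio, rounds=100):
    """Each round: the deposit gets counted as money, the bank
    keeps reserve_ratio of it and lends out the rest, and the
    lent money comes back as someone else's deposit. The total
    converges to initial_deposit / reserve_ratio."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)
    return total

# $100 deposited, banks keep 10% of each deposit in reserve:
print(round(total_money_created(100, 0.10), 2))  # close to 100 / 0.10 = $1,000
```

So one $100 deposit can end up supporting roughly $1,000 of money moving around, which is why making loans easier (lower interest rates) acts a lot like printing money.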

So: the economy is a bit controlled when governments use their central banks to mess with interest rates, or print more money than is taken out of circulation. That's called "monetary policy", because it controls the amount of money in the economy. Governments can also exert control by "fiscal policy" - basically, spending more money without increasing taxes, and issuing bonds (IOUs, known as treasury bills for the US government, Canada Savings Bonds in Canada, and other similar things in other countries) to cover the cost. Basically if lowering interest rates didn't make enough people want to get a loan, the government can go "I'll have a loan then, at these nice low interest rates, and build a thing, and that'll give people jobs for a bit".

But the thing is, the economy is just people doing things. And people are complicated. If you raise the price of a chicken, you can't really tell whether people are going to buy less chicken and more pork, or less chicken and more rice, or less chicken and more Canada Savings Bonds. So although there are some "policy levers" that governments can pull to affect things in broad terms, unless they get really micro-manage-y about prices (which is a bad idea, because prices carry a lot of information and the government isn't good at deciding what the right price for things should be - there's actually an argument that they shouldn't be deciding what the right price for money should be, and their efforts to control the interest rate are counter-productive in the long run) then the government isn't really in control. Things are always changing - new technologies are coming into existence, old sources of resources are running out, cultural factors mean people want different stuff than they used to, thousands of things are changing all the time and feeding back into each other. It's like an ecosystem. Maybe, and I mean only maybe we can make improvements to an ecosystem by carefully managing some things, but doing so is complicated and there are lots of unintended side effects, and maybe the best thing to do is study similar ecosystems in a less disrupted state, ask what we changed that screwed things up so that now there are only jellyfish, and then un-change it and let the ecosystem do its thing unmolested. The same argument applies to the economy. The economy is 7+ billion people thinking about what different things are worth to them and doing the best they can to produce stuff that other people want for as low of a cost to themselves as possible. The amount of information processing that happens when you take 7 billion people thinking for any amount of time is beyond what any computer or centralized government could match. 
So the best we could do is only a "kinda, but sometimes things go off the rails" sort of control.

There are some key differences between the economy and an ecosystem, though. One is, in an ecosystem, there are natural checks and balances. Even a top predator doesn't get to eat everything, because there are physical limits to how big and strong and fast a predator can become (and if you want to live by eating small bugs, being big is a disadvantage - there's a natural push to specialization). But because corporations are make-believe, they aren't subject to as many physical limits. And the bigger a corporation gets, the more power it gets, which makes it easier for it to grow bigger later, and harder for new entrants to compete. Bad news, and justification for some government interventions. What kind of government interventions make sense is a different post, but the thing is, the "leave things alone or you're likely to screw them up, markets are really powerful decision making mechanisms" argument is valid, but so is the "if you leave things alone they're likely to screw themselves up in defined ways without your help, so you should probably intervene a bit" argument. Ecosystems reach a natural balance with diverse life forms competing and all of them filling specialized niches. Markets reach a natural balance called "monopoly", which is not fun for consumers. Also, corporations' social license to exist comes from the idea that we're all better off if that is allowed to happen. Unlike natural physically embodied life-forms, they don't have any intrinsic right to exist, and the rules they have to play by are the rules people decide would be for the best, not the natural physical laws we all have to live by as life-forms. So... long policy posts could happen later, but I want to end this one by reassuring readers that while I understand the rationale for laissez faire economics, I also understand the rationales for government intervention. When we should step in and start messing with markets is a tricky, tricky problem indeed.
0 is not the right answer, but "as little as possible to achieve desired effects" is.

There is much more I could talk about, but I think that answers the question that was asked (possibly more thoroughly than the asker anticipated :) ).

Introduction


Looking at this blog (as I envision it will become) you may be going "why are there so many apparently random things here? What's the common thread? What's this blog for?"

See, the thing is, as a smaller version of my adult self (age 4 or so, shortly after my first retained memories) I realized that knowing things was neat, and sometimes people do sub-optimal things because they don’t. So I made it my mission to learn all the things that are learnable, so that then I could tell people about them and the world would be better. Yes, I know, that’s a very simple plan. I was 4. It was a simpler time.

It’s several decades later now, but what happened was I stuck with the plan and have learned a lot of things, one of which is that there are too many things for “learn all of them” to be a viable approach. That made me sad, but still I think more is better (unlike with consumer choice, where the optimal number of options in many categories seems to be around 3-4 and good default options are important), and learning new things is one of my favorite activities. And I often find myself saying the same things to different people, like “here’s what your body uses salt for when you eat it” or “bees are dying a lot and that’s bad, here’s what’s happening and why”, or “plankton isn’t one animal, it’s a category of organisms. Basically anything that’s fairly tiny and floaty counts as plankton”. And now there’s the Internet, where I can put stuff so other people can find it, and when someone asks me a question, instead of going “Ok, let’s sit down for half an hour, I wish there was a cable to do a data transfer between our brains but there is not”, I can go “people read words much faster than they speak them, here’s a link that will save you some time” and also “I’m really glad I don’t have to say that twice, it was long”.

So this blog is a place for me to put things I know, plus also things I think are true, so that people (probably mostly people I know) can read them later. I don’t anticipate a large audience, that’s not what this is for. It will start out small and grow as I have time to write things. I welcome corrections of any factually inaccurate statements you find here, because being wrong happens and misinformation is icky.

Happy reading!