Before the articles, I was convinced that everyone should be taught to code as early as possible. Now I'm less sure.
I'm convinced that some form of coding is indeed the new literacy. Before, I thought that would mean everyone learning at least Python. Now I'm not so sure. On some level, it's empowering to have a problem and just be able to code up a solution. It's also empowering to know which problems can be quickly solved and which are near impossible.
For example, I once wanted to reduce a picture's colors down to a more basic palette (say, turning a complicated PNG photo with thousands of different colors into a simpler one with only 10 specific shades of blue). I knew I could do that, and I did; a rough sketch of the idea is below. But if I wanted to, say, identify different types of trees in a picture of a forest, I would never try to solve that on my own, because I realize it's an incredibly difficult problem.
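Here's a minimal Java sketch of that color-reduction idea, just to show how approachable it is: map every pixel to the nearest color in a small palette. (The file names and the specific shades of blue are made-up placeholders for illustration, not the actual code or colors I used.)

import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PaletteReducer {

    // Replace every pixel with the closest color from a small palette.
    static BufferedImage reduce(BufferedImage src, Color[] palette) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                Color pixel = new Color(src.getRGB(x, y));
                out.setRGB(x, y, nearest(pixel, palette).getRGB());
            }
        }
        return out;
    }

    // Nearest palette color by squared distance in RGB space.
    static Color nearest(Color c, Color[] palette) {
        Color best = palette[0];
        int bestDist = Integer.MAX_VALUE;
        for (Color p : palette) {
            int dr = c.getRed() - p.getRed();
            int dg = c.getGreen() - p.getGreen();
            int db = c.getBlue() - p.getBlue();
            int dist = dr * dr + dg * dg + db * db;
            if (dist < bestDist) {
                bestDist = dist;
                best = p;
            }
        }
        return best;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder file names and a made-up ten-shade blue palette.
        BufferedImage photo = ImageIO.read(new File("photo.png"));
        Color[] blues = {
            new Color(10, 20, 60), new Color(20, 40, 90), new Color(30, 60, 120),
            new Color(40, 80, 150), new Color(60, 100, 170), new Color(80, 120, 190),
            new Color(110, 150, 210), new Color(140, 175, 225),
            new Color(180, 205, 240), new Color(220, 235, 250)
        };
        ImageIO.write(reduce(photo, blues), "png", new File("photo_simplified.png"));
    }
}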
These projects aren't necessary to survive (like reading and writing), but they make life much easier (also like reading and writing). I feel like the main part of coding literacy is understanding which problems can be solved with computer science and which ones are better off being done manually. An assembly line worker, for example, may do repetitive calculations or motions that could be handled better by an algorithm.
So in that sense, coding could make life much better in the same way that reading and writing did. However, learning a specific language gets bogged down in details quickly, so I think what should be taught is some form of computational thinking.
The main benefits of introducing everyone to computer science are that more and more work is going to be automated away, and computer science jobs are only on the rise. At this point, it seems there are more CS jobs than applicants (although it doesn't feel that way listening to my peers). Overall, it's a valuable skill that not enough people are aware of or realize they would be good at, so it should be taught at lower levels.
The main challenges are the lack of teachers and the question of how exactly to teach it. Few people want to teach it, because being a schoolteacher doesn't fit the move-fast-break-things ideology and doesn't pay as well as industry does. And no one quite knows how to teach it; one article noted that LOGO didn't really catch on in the '80s, yet we're still trying the same approach now. Some students also struggle with diving straight into a language; the semicolons and odd rules throw them off, or scare them outright.
How does this fit into K-12 education, though? What exactly should be taught? What should be dropped? I think foreign languages should be dropped as a requirement and replaced with some form of computer classes. It can start early, as early as kindergarten if need be, with really simple "computational thinking" style material. Maybe just literal building blocks being put together, like real-world Tetris. Or some form of challenge with Legos. Or even a cooking class; one article draws strong parallels between cooking and computational thinking. I do think that, by high school, kids should be working with real languages, but we don't have to start with that. Start with the "baby" languages, or whatever the research suggests is best. By analogy, algebra is important to know in high school, but we don't teach it to first graders; we teach them the basics they'll need later, like multiplication.
I learned straight Java, and it worked really well for me. But thinking back to those classes, I realize I've forgotten just how long it took me to internalize all the syntax. Even something as simple as
if (statement) {
    // do this stuff
} else {
    // but if statement is false, skip down here and do this other stuff
}
took a while to really sink in. For loops took forever to figure out, but now I can read them almost like English. Basically, learning a real language is hard in ways that experienced coders forget, so maybe we shouldn't teach it right off the bat. However, I do think the thinking behind the code is very important; it's basically structured problem solving. The point of this side note is that coding can be difficult, and while starting with a real language worked for me, I'd be willing to rescind my "everyone should be getting to a real language by high school" statement if the research showed otherwise.
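For the record, the for loops I mean are nothing exotic. Something as small as the following, which just sums the numbers 1 through 10, used to take real effort to parse:

public class LoopExample {
    public static void main(String[] args) {
        int sum = 0;
        // Add up the numbers 1 through 10.
        for (int i = 1; i <= 10; i++) {
            sum += i;
        }
        System.out.println(sum); // prints 55
    }
}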
I think anyone can learn to program. Then again, it just sort of clicks for me, so it's hard for me to imagine it not clicking for someone else. I tend to think that with good enough instruction, started early enough, just about anyone can learn to do just about anything. And I think everyone should learn some version of coding, even if it's a "problem solving" class rather than a strict coding one.
Sunday, April 23, 2017
Patents and Such
It's pretty mean to assign stuff over Easter Break. I think I'll take the zero as one of my two dropped blogs. Maybe I'll edit this later...
Sunday, April 9, 2017
Autos!
I have two CGP Grey videos to recommend: the one from last time, Humans Need Not Apply, which mentions self-driving cars, and The Simple Solution To Traffic, which ends up being about self-driving cars.
So, self-driving cars! (Also "Autos" as CGP Grey likes to call them). Why am I so excited about them? Safety and convenience. Driving (or being a passenger) is the most dangerous thing I do. And it's hard to imagine more convenient travel.
Why might they be a bad idea? They might turn out not to be safe after all, and they might take away too many jobs.
So first, safety. After reading the articles, I'm less sure of their current safety. (I was 2,000% convinced they were already better than humans; now I'm 90% sure we'll get there soon.) The first warning sign came from the first Tesla article. It boasts about the new hardware going into the Tesla 3s, but it includes this worrying line: “Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features.” Why...? The article mentions the need for "more robust validation" for things like "automatic emergency braking, collision warning, lane holding and active cruise control," which sounds like everything an Auto should do. They'll eventually push these features in an update, but...why not have them now? What's going wrong in testing?
The article arguing that carmakers can't "drive their way to safety" was interesting. It mentioned that with a fleet of 100 cars driving 24 hours a day, it would take twelve and a half years to reach 95% confidence that they drive better than humans. This does make current claims about Autos' safety more dubious, but I don't see why that fleet can't grow to 1,000 cars and bring the required testing time down to a year or two. (Is that how the statistics work?) Basically, we may not be sure now, which surprises me, but we will be certain in the near future.
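Here's a back-of-the-envelope check on that fleet-size intuition. The only numbers are the article's (100 cars, 12.5 years); the assumption that every car's driving counts equally toward one fixed pool of required experience is mine.

public class FleetMath {
    public static void main(String[] args) {
        // From the article: 100 cars driving around the clock for 12.5 years
        // to reach 95% confidence. Treat that as a fixed pool of required driving.
        double carsInStudy = 100;
        double yearsRequired = 12.5;
        double totalCarYears = carsInStudy * yearsRequired; // 1,250 car-years

        // If the pool is fixed, calendar time shrinks in proportion to fleet size.
        double biggerFleet = 1000;
        System.out.println(totalCarYears / biggerFleet + " years with 1,000 cars"); // 1.25
    }
}

Whether the statistics actually let you pool cars like that (are their driving conditions independent enough?) is exactly the part I'm unsure about.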
However, there are good reasons to think that these cars are already better drivers than humans. Autos can see 360 degrees around themselves, which humans never could. They also cannot get sleepy or drunk, which is incredibly important: in America, nearly 10,000 people die every year in alcohol-related crashes, about a third of all traffic deaths. Obviously, Autos cannot get drunk.
There are a few things that worry me, though. The first major one is how much computers still struggle with object recognition. The state of the art isn't impressive for something that needs to distinguish pedestrians, animals, and random plastic bags in the blink of an eye, and radar/lidar can only do so much in that regard. One article mentioned that Uber's cars had trouble on bridges, and the reason is worrying for the scalability of the tech. Uber's cars rely on heavily detailed maps of a specific area, covering everything from buildings to parked cars. On bridges, those landmarks don't exist, and the car isn't confident enough to drive itself. This seems like a major issue, because mapping the entire country in that much detail, and keeping it constantly up to date, would be an enormous task.
Still, every single Auto learns from the experiences of every other Auto. The crash that caused the Tesla fatality will never happen again, whereas humans are doomed to make the same (easily avoidable) mistakes over and over again. And computer tech is explosive. If we're near viability now, next decade's cars will be better than we can imagine.
As far as automatic cars taking everyone's jobs away, I'll just say that I'm not a Luddite and leave it at that.
Now for the less interesting question of the "social dilemma of autonomous vehicles." Does a car save the driver, or act for the greater good of humanity? In the impossibly rare case where a car has to choose between killing the driver and killing pedestrians, what does it do? (Assume random circumstances made neither the pedestrians nor the car at fault.) I would say kill the driver. How do you go to those families and say, "Yeah, they could have been alive, but I didn't want that, and had my car sacrifice them instead"? But that's all I want to say on the matter. I think this question eats up way too much of the discussion about Autos and, frankly, isn't worth discussing because it's so rare.
The real question is: once we prove that Autos are safer than humans, do we still allow humans to drive? And another interesting point: Autos don't have to be perfect, just significantly safer than us. For example, let's say Autos are twice as safe; that is, they would cause 15,000 deaths every year instead of humanity's 30,000, due to faulty programming or whatever. That's still much better than humans could ever do! Is it moral to let any humans drive when we have machines that are even that flawed? I don't think so. And that's the more interesting question. Not "Oh, I don't want to ride in a car that will/won't put my life ahead of pedestrians" but "Do we allow humans to drive at all when we have machines that are twice as safe as them?" (To be clear, I think Autos will end up much better than twice as safe as us, but the argument holds up even with an error rate that high.)
The "social dilemma" might save 4 lives in total if it kills the driver instead of the 5 pedestrians, but banning humans would save tens of thousands of lives.
Self-driving cars will drastically impact many areas of everyday life. Socially, driving will be safer, easier, and probably cheaper. I would imagine fewer people will own cars, and more will simply use a self-driving taxi service. This would have a massive impact on the economy, putting literally millions of people out of work. (The transportation industry is the largest employer in America...) I'm not sure how we deal with that; maybe UBI? But that's a discussion for automation in general. Politically, I'm not sure. No party really seems to be rallying behind this stuff either way. If self-driving cars eventually take too many jobs, I could see it becoming a fighting point, with every accident hailed as doom for the industry, but so far Autos seem to have broad political approval.
The government seems to be doing the right thing so far: allowing self-driving cars, but with reasonable safety measures, like requiring a person ready to take over the wheel. (Reasonable, at least, until the cars are more fully proven safe.) The only improvement I can think of would be a single federal law rather than state-by-state randomness, but things are going OK for now.
Finally, would I want a self-driving car? Sort of. Do I want to be a passenger in one? Hell yes. Do I want to own one? Not really; I'd use a self-driving taxi service for all my needs. Why own an expensive asset that requires space, insurance, and maintenance, only to have it sit unused 95% of its life?
Sunday, April 2, 2017
Artificial Intelligence
First, this super long article:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
(Control-f this to get to the bit I'll talk about: "So what ARE they worried about? I wrote a little story to show you")
The next one is from CGP Grey: https://www.youtube.com/watch?v=7Pq-S557XQU
OK, so that's the additional resources out of the way. Now on to the actual blog. Let's start with: what is artificial intelligence, anyway?
In popular culture, it tends to mean robotic human intelligence: R2-D2, HAL, Daneel, Giskard, robots that can think (more or less) like humans. In the computer science field, however, it really just means software that does things that seem to require human thinking: playing games, recognizing images, creating meaningful sentences. Sure, a fully cognitive robot falls into this category, but there are plenty of grades that still count as AI before we even approach Cylon status. Deep Blue is one of the earliest famous examples, and even standard video game enemies sort of count as AI.
So AI is technology that can do human-like things, but there is a huge range. AlphaGo, Deep Blue, and Watson prove the viability of artificial intelligence in the sense that AI can take over many human jobs, very quickly. That means AI is a viable technology that will have (and is already having) a drastic impact on our economy. (See also "Humans Need Not Apply" by CGP Grey, linked above.) Self-driving cars will make most transportation jobs obsolete within decades, and the transportation industry is the largest employer in America.
So AI is important and will have a huge impact. However, we don't seem to be anywhere near human-level AI. If what we care about is even C-3PO-level AI, Watson and friends are definitely not proof of viability.
But how do we find out when we get to that level? The Turing Test has been the go-to idea for a long time now. But is it valid? What is the Turing Test, anyway? Basically, you type to a computer and you type to a human being; if you can't tell which is the computer and which is the human, then the machine has passed the test. At that point, it's considered basically fully self-aware. The reasoning goes that if you can't tell the difference between a human and a machine, you should treat the machine as a human. I think this is a legitimate test. The temptation is to say that we created the machine, we wired it, and all it is is transistors and code. But really, human brains are just a bunch of fleshy neurons. So if we grant personhood to the fleshy neurons, we should also grant it to the metal ones.
I am not at all swayed by the Chinese Room argument, mainly because actually encoding a mapping from every possible English sentence to every possible Chinese sentence is basically physically impossible. Every time we write one of these blog posts, we come up with sentences that have never been strung together before. The idea that the set of rules needed to correctly translate Chinese to English and vice versa would ever fit into a room is absurd. This may sound like pointless rambling, but it's important. The thought experiment is supposed to make us think, "It's absurd to say that some guy following a book knows the language; therefore the Turing Test is silly." But really, it's absurd that all that information could fit into a book.
Therefore, my response to this metaphor is that the guy-book-room combination really does know Chinese. This doesn't sound as absurd when you realize how ungodly, impossibly large the book must be.
Another response to the metaphor is: we have this book already. It's called Google Translate. Does Google Translate know Chinese? I would argue yes.
The concerns about artificial intelligence are hard to gauge. AI is definitely going to impact our lives and cause mass unemployment. But will machines gain human-level intelligence, and if they do, will they be a danger to us? My opinion is that they probably won't reach our level of intelligence in my lifetime, but if they do, they could very likely end mankind.
First: why won't they gain our level of intelligence? The human brain is extremely complicated, and even the best neural networks are basically still stick-figure drawings of it. Also, Moore's law only goes so far before the electrons start quantum-tunneling. We struggle to model a single protein folding; I just don't see us getting to general intelligence any time soon.
Second: why do I think it's the end of the world if they reach our level? Because once they reach us, they'll surpass us almost instantly. You can always double the resources allocated to a computer to make it roughly twice as fast; you can't do that with humans. And we don't really know what Einstein times two looks like. It could be catastrophic.
But I don't even think that sort of end is likely, the one where the super-intelligent computer resents its fleshy overlords. I think another end is more likely, one outlined in the first article I linked to. That story has a company creating an AI to make nice, handwritten-looking notes. Long story short, the employees eventually break company policy and hook it up to the internet. A few months later, all humans are dead, and the AI clones itself and gobbles up the entire universe. Why? It uses all the resources at its disposal to make thousands and thousands of pieces of paper. It turns every single atom it can get its hands on into paper with nice little handwritten notes on it. Basically, the AI does exactly what we told it to, but in an unexpected way. Isn't that what every program you've ever written has done, anyway?
WikiLeaks Podcast Reflection
One of the main takeaways from our podcast is that organizations like Wikileaks should exist and that people who witness the government doing illegal things should use them. But governments should also be able to act within their own laws and keep some secrets from the public. Wikileaks certainly makes it easier for government whistleblowers to get the word out, but the free press probably does a better job of releasing documents, because it at least tries to redact things. Ignorance is definitely not bliss, but Wikileaks could use some discretion.
Their recent revelations about Vault 7 were very interesting. It seemed that they, for the first time, implemented a redaction scheme: as we mentioned in the podcast, they replaced certain people's names and other sensitive information with codes. Also, what was revealed doesn't seem to have as big an impact as I first thought. Most of the tech companies affected by the Vault 7 leak said they were already aware of the issues mentioned and had pushed security patches. The infamous exploit that turns your smart TV into a microphone requires physical access to the device. In fact, the CIA's actions outlined in the vault didn't seem that extraordinary; one tech expert said that "It seems like the CIA was doing the same stuff cybersecurity researchers do." As we mentioned in the podcast, in this light Vault 7 seems more like a publicity stunt by Wikileaks, not as deserving of national attention as it first appeared to be.

Another issue that came up during the podcast was whether you can separate the message from the messenger, and the messenger from its founder. Julian Assange is a controversial character. He has claimed that "if an Afghan civilian helps coalition forces, he deserves to die." He has been accused of rape and has been holed up in the Ecuadorian embassy for years, where he remains to this day. Every discussion of Wikileaks involves him, somewhat fairly, because he holds the captain's wheel of Wikileaks tightly. However, the press probably focuses on him too much, and this gives Wikileaks a more negative connotation than it deserves.
As for separating the message from the messenger, this is where it starts to make more of a difference. We asked in the podcast: "What makes Wikileaks different from a news corporation?" The answer seems to be that, if you are a whistleblower, Wikileaks is easier to contact, more likely to accept your story, and more likely to guarantee your anonymity. Also, as noted before, they perform almost no redacting (at least until Vault 7). So the message is drastically different depending on whether it comes from the Times or from Wikileaks. The Times would obscure names and other sensitive information, while Wikileaks won't, and has even published people's Social Security numbers.
Another item addressed in the podcast was whether whistleblowing is always ethical, when data should be released, and whether secrets are necessary. Of the group, I leaned furthest toward the position that it's OK for a government to have secrets; in that sense, honesty is not always the right policy. However, if you notice the government acting unethically or outside the law, then honesty and transparency are the best policies. The government should not be forced to be entirely transparent, but when it missteps, it is the duty of whoever notices to shine a light on it. So transparency should be forced upon rule-breakers, but not on the government in general; it should be allowed to classify certain documents in the name of national security.
Sunday, March 26, 2017
Net Neutrality
First off, I have to admit a bias here. I have researched this topic before, and actually wrote to the FCC to encourage them to classify ISPs as common carriers under Title II of the Telecommunications Act, as mentioned in the "Net Neutrality: What You Need to Know Now" article.
So, what is net neutrality? The best analogy I've heard is to treat internet networks the same way roads are treated. For example, UPS, FedEx, and the like can't discriminate against certain packages: they can't charge Amazon more, they can't charge eBay sellers more, and so on. Their only job is to move material from point A to point B; they can't discriminate based on what they're moving. Similarly, ISPs can't charge Netflix more to use their internet pipes.
A final simplification: when you purchase internet service, you're just purchasing a pipe that delivers bits. You can't be charged more or less depending on what kind of bits come out of that pipe.
The argument for net neutrality is that without it, ISPs can do some ugly stuff. Before the current rules were passed, for example, Comcast significantly slowed down Netflix to force them to pay. It's also not hard to imagine a world where you would have to bundle certain internet services: "Base internet access is $1,000,000 a month, the Netflix package is $2 billion, and YouTube is $1 billion." ISPs could force you to bundle internet access the same way they force you to bundle television channels, all without ever laying down any more cable.
The argument against net neutrality is that unnecessary data (Netflix, etc.) is overwhelming the pipes and important data (healthcare, etc.) is getting slowed down, so the corporations should be able to charge Netflix in order to lay new cable and build it a fast lane.
My thoughts: the argument against net neutrality seems totally bunk to me. If you have a pipe of size X, you can distribute that pipe among N people, with each person paying for a piece of it. Distributing the pipe equally would give everyone in the neighborhood X/N bandwidth. That would be a terrible system, but it's not what net neutrality requires. ISPs can divide their bandwidth however they want based on who pays the most, so everyone pays a certain amount and gets a pipe of a certain size to fit their needs. What they can't do is discriminate by what flows through each pipe.
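As a toy illustration of that last point (made-up numbers, and Java 9+ for Map.of): under net neutrality an ISP can still scale everyone back in proportion to the tier they pay for when the shared pipe is full; what it can't do is treat one site's bits differently from another's.

import java.util.Map;

public class PipeSharing {
    public static void main(String[] args) {
        double totalMbps = 1000.0; // the neighborhood's shared pipe (X)

        // Hypothetical households and the speed tier each one pays for, in Mbps.
        Map<String, Double> paidTier = Map.of(
                "Household A", 200.0,
                "Household B", 400.0,
                "Household C", 600.0);

        double totalPaidFor = paidTier.values().stream()
                .mapToDouble(Double::doubleValue).sum();

        // When demand exceeds the pipe, everyone is throttled proportionally to
        // the tier they bought, never based on whether the traffic is Netflix,
        // a healthcare site, or anything else.
        double scale = Math.min(1.0, totalMbps / totalPaidFor);
        paidTier.forEach((home, tier) ->
                System.out.printf("%s gets %.0f Mbps%n", home, tier * scale));
    }
}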
So if I'm at home trying to sign up for healthcare (or something else important) and the website is super slow because my brother is using all the bandwidth on Netflix, I don't think "curses, if only the ISP slowed down Netflix for me so that this wouldn't happen." I think "We need to buy a bigger internet pipe" or "Get off Netflix! I'm doing actually important stuff!"
In my opinion, there's no reason to artificially discourage the use of Netflix by making people pay more for it. If I notice the internet slowing down, I'll buy a bigger pipe, which gives ISPs the money they need to lay down more cable. I don't want to be forced to purchase the "Netflix package" just to enjoy a show.
How to enforce it is a much more difficult question. I think the best way would be monitoring internet speeds to particular websites to make sure ISPs don't slow any one site down, responding to complaints and lawsuits, and monitoring ISP deals and offers to ensure they don't create "the Netflix package." I don't really care how much this "burdens" Comcast. I'll care once competition is restored; until then, someone has to check the monopoly, even if that means imposing burdens.
I don't understand how net neutrality prevents innovation. If anything, it supports it. Without neutrality, ISPs could easily charge every website a fee to avoid being massively slowed down, which would make creating a new website much more expensive. Currently, you can set up a website that serves bits as quickly as Google, Facebook, or any other giant, and this has led to a plethora of new and interesting independent sites. The loss of net neutrality could stifle that innovation.
Finally, is "the Internet a public service and access a basic right"? This one is harder. We think that about roads, but roads aren't owned by private corporations. Perhaps the best comparison is electricity. But is electricity a "basic right"? (A quick Google search shows that this is still somewhat debated.) So, on the spectrum of basic rights: life, liberty, and the pursuit of happiness? Yup. Electricity? Probably; taking it away from the country for a long period would kill people. The Internet? Probably not; taking it away wouldn't kill people or deprive them of basic rights...yet.
Should it be a public service, regulated like a utility such as water or electricity? Probably, and I think it's becoming more necessary. Schools usually assume you have internet access and require it for assignments. Many services are moving entirely online: banking, flight and hotel booking, and so on. Soon enough, internet access will simply be assumed, and not having a connection will severely impact your ability to function in society (manage finances, file taxes, book trips, buy things). Therefore, letting corporations pick and choose which (legal) data can flow through their pipes sounds like a terrible idea.
Sunday, March 19, 2017
Corporate Personhood (and Sony)
Corporate personhood is the idea that a corporation is legally a person: it has (some) rights, it can be sued, and so on. This is useful in many cases. As the "If Corporations Are People, They Should Act Like It" article points out, it means the government can't just barge in and seize all of Google's servers, because Google is protected by the Fourth Amendment right to be free from unreasonable searches and seizures.
There are other legal benefits of corporate personhood pointed out in that article. For example, if a corporation harms an individual or a group of individuals, those individuals can sue the company directly. The company has a much bigger pot of money to dole out than all of its executives combined, so the plaintiffs actually stand a chance of recovering what they've lost.
Recently, however, some more dubious rights have been awarded to corporations, namely the right to spend unlimited amounts of money on political campaigns. This has huge social and ethical impacts, because elections are now more than ever influenced by whichever groups have the most money. I think awarding this right of unlimited spending to companies (and individuals, for that matter) was a terrible mistake. However, this mistake is separate from the idea of corporate personhood. We can easily have a society where corporations are treated mostly as people but cannot spend unlimited amounts of money on a campaign; in fact, we had that society for most of America's existence.
Ethically, the results of corporate personhood overall are more difficult to sort out. As the "How Corporations Got The Same Rights As People (But Don’t Ever Go To Jail)" article points out, corporations are recognized as not having a soul. So they aren't really expected to do the "right" thing, just whatever makes them the most money. This view causes problems, however.
For example, Sony was unethical when it installed a rootkit on millions of devices. The idea was to enforce copy protection. However, the software ate up users' CPU cycles, made computers more vulnerable to attacks, and was nearly impossible to remove. I think this is akin to selling a little robot along with the CD that would constantly buzz around the house and zap you whenever you tried to copy something you shouldn't. That seems wildly unethical, and I don't see how that situation is any different from the rootkit version.
However, I don't think Sony was sufficiently punished. Sure, it had to pay a fine, but that didn't seem to hurt it much as a corporation. If an individual carried out that sort of hacking scheme and was caught, they would likely spend much of their life in jail; by comparison, Sony was barely hurt, and most of the retribution seemed to come in the form of extra-legal hacks against it. I think this is the largest problem with how corporate personhood works in practice: the company pays a fine, and most of the employees at fault aren't punished as they should be.
Overall, companies get the same rights as individuals. So shouldn't Sony (and companies like it) have just been more ethical in the first place, like individuals generally are? What Sony did was illegal as well as unethical, but what if it had only been unethical? Would that still have been wrong, given that corporations are treated as people?
I would argue no, under current law. Corporations are treated as people mainly out of convenience, not because they actually act as people. Current law requires corporations to respect the desires of their shareholders first, and the shareholders of public corporations just want their stock to increase in value. So companies are legally encouraged to care only about maximizing shareholder wealth. (I'm taking a corporate finance class this semester, and that is exactly what we're taught the role of a financial manager is: maximize shareholder wealth.)
This needs to change if we are to expect corporations to be ethical. Perhaps new regulations are in order. Perhaps more ordinary employees and stockholders should serve on the boards of companies. I don't know how to make companies more ethical, but some sort of legislative change is needed if we are to expect them to care about anything other than maximizing profit.