Sunday, March 26, 2017

Net Neutrality

First off, I have to admit a bias here. I have researched this topic before, and actually wrote to the FCC to encourage them to classify ISPs as telecommunications services under Title II of the Telecommunications Act, as mentioned in the "Net Neutrality: What You Need to Know Now" article.

So, what is net neutrality? The best analogy I've heard is to treat internet networks the same way roads are treated. For example, UPS, FedEx, etc. can't discriminate against certain packages. They can't charge Amazon more, and they can't charge eBay sellers more. Their only job is to move material from point A to point B; they can't discriminate based on what they're moving. Similarly, ISPs can't charge Netflix more to use their internet pipes.

A final simplification: When you purchase internet service, you're just purchasing a pipe that serves out bits. You can't be charged more or less depending on what kind of bits come out of that pipe.

The argument for net neutrality is that without it, ISPs can do some ugly stuff. Comcast significantly slowed down Netflix to force them to pay, for example, before the new rules were passed. It's also not hard to imagine a world where you would have to bundle certain internet services. "Base internet access is $1,000,000 a month, the Netflix package is $2 billion and YouTube is $1 billion." They could force you to bundle internet coverage the same way they force you to bundle television channels, all without ever actually laying down any more cable.

The argument against net neutrality is that some unnecessary data (Netflix, etc.) is overflowing the pipes while important data (healthcare, etc.) gets slowed down, so the corporations should be able to charge Netflix in order to lay down new cable and build them a fast lane.

My thoughts: The argument against net neutrality seems totally bunk to me. If you have a pipe of size X, you can distribute that pipe among N people, with each person paying for a piece of it. Equally distributing this pipe would give everyone in the neighborhood X/N bandwidth. That's a terrible system, but that's not what net neutrality requires. The ISPs can divide their bandwidth however they want, based on who pays the most. So everyone pays a certain amount and gets a pipe of a certain size to fit their needs.
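
To make that concrete, here's a minimal Python sketch of the allocation point. The bandwidth figure and subscriber payments are made up for illustration; the point is that net neutrality is fine with tiered pricing per subscriber, and only forbids pricing based on which bits flow:

```python
TOTAL_BANDWIDTH = 1000  # Mbps in the shared neighborhood pipe (made-up figure)

subscribers = {"alice": 20, "bob": 60, "carol": 120}  # dollars paid per month (made up)

def equal_split(total, subs):
    """The naive X/N split: everyone gets the same slice regardless of payment."""
    return {name: total / len(subs) for name in subs}

def tiered_split(total, subs):
    """What net neutrality permits: slices proportional to what each subscriber
    pays. Allocation depends on payment, never on which bits flow through."""
    total_paid = sum(subs.values())
    return {name: total * paid / total_paid for name, paid in subs.items()}

print(equal_split(TOTAL_BANDWIDTH, subscribers))   # everyone gets ~333 Mbps
print(tiered_split(TOTAL_BANDWIDTH, subscribers))  # 100 / 300 / 600 Mbps
```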

So if I'm at home trying to sign up for healthcare (or something else important) and the website is super slow because my brother is using all the bandwidth on Netflix, I don't think "curses, if only the ISP slowed down Netflix for me so that this wouldn't happen." I think "We need to buy a bigger internet pipe" or "Get off Netflix! I'm doing actually important stuff!"

In my opinion, there's no reason to artificially discourage use of Netflix by paying more for it. If I notice the internet slowing down, I'll buy a bigger pipe, which will give ISPs the money they need to lay down more cable. I don't want to be forced to purchase the "Netflix package" to actually enjoy a show.


How to implement it is a much more difficult question. I think the best way would be monitoring internet speeds to particular websites to make sure the ISPs don't throttle any one site, and responding to complaints and lawsuits. Also, monitoring ISP deals and offers to ensure they don't create "The Netflix Package." I don't really care how much this "burdens" Comcast. I'll care once competition is restored, but without competition, someone has to fight the monopoly and enforce burdens.
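
As a rough illustration of what that monitoring might look like, here's a sketch that times downloads from a handful of sites and flags any that are dramatically slower than their peers. The URLs are hypothetical placeholders, and real enforcement would need many samples from many vantage points, but the core measurement is simple:

```python
import time
import urllib.request

# Hypothetical test endpoints (placeholders, not real measurement targets).
TEST_URLS = {
    "video-site": "https://example-video.test/sample.bin",
    "news-site": "https://example-news.test/sample.bin",
    "search-site": "https://example-search.test/sample.bin",
}

def throughput_mbps(url, limit_bytes=5_000_000):
    """Download up to limit_bytes and return the observed speed in Mbps."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        data = resp.read(limit_bytes)
    elapsed = time.monotonic() - start
    return len(data) * 8 / elapsed / 1_000_000

speeds = {site: throughput_mbps(url) for site, url in TEST_URLS.items()}
median = sorted(speeds.values())[len(speeds) // 2]
for site, mbps in speeds.items():
    flag = "  <-- far slower than its peers; worth investigating" if mbps < 0.5 * median else ""
    print(f"{site}: {mbps:.1f} Mbps{flag}")
```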

I don't understand how Net Neutrality prevents innovation. If anything, it supports it. Without neutrality, ISPs could easily charge every website a fee to avoid being massively slowed down. That would make creating a new website much more expensive. Currently, you can set up a website that serves bits as quickly as Google, Facebook, or any other giant. This has led to a plethora of new and interesting independent sites. The loss of net neutrality could stifle that innovation.


Finally, there's the question of whether "the Internet is a public service and access should be a basic right." This one is harder. We think that about roads, but roads aren't partially owned by corporations. Perhaps the best example is electricity. But is electricity a "basic right"? (A quick Google shows that this is still somewhat debated.) So on the spectrum of basic rights: life, liberty, and the pursuit of happiness? Yup. Electricity? Probably; taking it away from the country for a long period would kill people. Internet? Probably not; taking it away wouldn't kill people or deprive them of basic rights...yet.

Should it be a public service, regulated as a utility like water or electricity? Probably, and I think it's becoming more necessary. Schools usually assume you have internet access and require it for assignments. Many services are going entirely online: banking, flight and hotel booking, etc. Soon enough, I posit, internet access will be assumed, and not having a connection will severely impact your ability to function in society (manage finances, file taxes, book trips, buy anything). Therefore, letting corporations pick and choose which (legal) data can go through their internet pipes sounds like a terrible idea.

Sunday, March 19, 2017

Corporate Personhood (and Sony)

Corporate personhood is the idea that a corporation is legally a person. It has (some) rights, it can be sued, etc. This is useful in many cases. As the "If Corporations Are People, They Should Act Like It" article points out, it means the government can't just barge in and seize all of Google's servers, because Google is protected by the Fourth Amendment right to be free of unreasonable searches and seizures.

There are other legal benefits of corporate personhood pointed out in that article. For example, if a corporation harms an individual or group of individuals, those individuals can sue the company directly. The company has a much bigger pot of money to dole out than all of its executives combined. Therefore, the plaintiffs actually stand a chance to recover what they've lost.

Recently, however, some more dubious rights have been awarded to corporations, namely the right to spend unlimited amounts of money on political campaigns. This has huge social and ethical impacts, because elections are now more than ever influenced by whichever groups have the most money. I think awarding this right of unlimited spending to companies (and individuals, for that matter) was a terrible mistake. However, this mistake is separate from the idea of corporate personhood. We can easily have a society where corporations are treated mostly as people but cannot spend unlimited amounts of money on a campaign; in fact, we had that society for most of America's existence.

Ethically, the results of corporate personhood overall are more difficult to sort out. As the "How Corporations Got The Same Rights As People (But Don’t Ever Go To Jail)" article points out, corporations are recognized as not having a soul. So they aren't really expected to do the "right" thing, just whatever makes them the most money. This view causes problems, however.

For example, Sony acted unethically when it installed a rootkit on millions of devices. The idea was to enforce copy protection. However, the software ate up users' CPU cycles and made computers more vulnerable to attacks. Furthermore, it was nearly impossible to remove. I think this is akin to selling a little robot along with the CD that would constantly buzz around the house and zap you whenever you tried to copy something you shouldn't. That seems wildly unethical, and I don't see how the rootkit version is any different.

However, I don't think Sony was sufficiently punished. Sure, they had to pay a fine, but it didn't seem to hurt them as a corporation. If a person committed that sort of hacking scheme and was caught, they would likely spend much of their life in jail. In comparison, Sony seemed to hardly be hurt. Most of the retribution seemed to come in the form of extra-legal hacks. I think this is the largest problem with the way corporate personhood is dealt with in practice. The company pays a fine and most of the employees at fault don't get punished as they should.

Overall, companies get the same rights as individuals. So shouldn't Sony (and companies like them) have just been more ethical in the first place, like individuals generally are? What Sony did was illegal as well as unethical, but what if it had just been unethical? Would that have been wrong since corporations are treated as people?

I would argue no, under current law. Corporations are treated as people mainly out of convenience, not because they actually act as people. Current law requires corporations to respect the desires of their shareholders first, and the shareholders of public corporations just want their stock to increase in value. So companies are legally encouraged to care only about maximizing shareholder wealth. (I'm taking a Corporate Finance class this semester, and that's exactly how we're taught to define the role of a financial manager: maximize shareholder wealth.)

This idea needs to change if we are to expect corporations to be ethical. Perhaps new regulations are in order. Perhaps more ordinary employees and stockholders should serve on corporate boards. I don't know how to make companies more ethical, but some sort of legislative change is needed if we are to expect companies to care about anything other than maximizing profit.

Sunday, March 5, 2017

Internet of Things

I've had a long-running argument with a friend about whether the Internet of Things can be a good thing. She is convinced that it is the worst idea conceived by man and will be the downfall of mankind. After reading the articles, I'm starting to see where she's coming from...although I think it can be repaired.

The motivation for the IoT is to make everything easier to access and smarter. Home controls that you can configure online. Dumpster sensors that make garbage collection more efficient. Cars that can talk to the internet. Even things like the Echo, with Alexa listening to and processing everything you say so it can respond to your wishes immediately. You have control over your own things from anywhere, which is quite convenient.

The main problem is, of course, security. If you can access that webcam remotely, how do you make sure no one else can? If your car can send and receive arbitrary data from the internet, how do you make sure your users don't download a virus? (Or at least, how do you guarantee that a virus can do no harm?)

So what should programmers do about security? In a perfect world, they would write 100% safe code. But that will never happen. So they should push their managers for greater security efforts, but ultimately, how much effort to spend on security is up to the company. Security requires a significant effort from a team of engineers, and the company must decide to allocate those resources.

Because companies decide how secure a product is, and because more security means more expense, the amount of security built into a device ends up directly proportional to how much consumers would care about a failure.

Therefore, cars seem relatively secure (despite the inflammatory articles). Consumers are hyper-sensitive to car hacking, and companies are even more so. I'm sure Ford has every possible incentive to make sure its cars are hack-proof. Imagine all Fords on the highway suddenly stopping, or veering off the road. It would destroy the company instantly. In this industry, interests are aligned; the industry sees the need to invest in security.

However, most IoT industries don't have this drive. They are pushed to create the "minimum viable product," always pushing down costs. As one article puts it, "Consumers do not perceive value in security and privacy. As a rule, many have not shown a willingness to pay for such things." Security becomes a second thought, because consumers don't seem to care about buying a cheap, insecure webcam.

Which brings up the question of who is liable when breaches occur. Ideally, the company that made the item. This is obvious in the case of cars, but what if someone just set no password? Or had a really bad password? It's the user's fault, right? But what if the default is no password? Then the company is probably to blame. Also, how do you even discover a webcam hack? You know if your car stops on the road, but how do you know that you're being watched? In short, companies should be liable, but the waters get really murky really fast.
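
One small, concrete thing a company could do on the design side is refuse to ship with weak credentials at all. Here's a minimal sketch of that idea; the factory-default list is illustrative, not from any real vendor, and a real device would also need secure credential storage, rate limiting, and so on:

```python
# Illustrative list of common factory defaults (made up for this sketch).
FACTORY_DEFAULTS = {"", "admin", "password", "12345", "root"}

def validate_setup_password(password: str) -> None:
    """Refuse to finish device setup with a missing, default, or trivial password."""
    if password in FACTORY_DEFAULTS:
        raise ValueError("empty or factory-default passwords are not allowed")
    if len(password) < 10:
        raise ValueError("password must be at least 10 characters")

# The setup flow would call this before ever enabling remote access:
try:
    validate_setup_password("admin")
except ValueError as err:
    print(f"Setup blocked: {err}")
```

A device built this way at least removes the "the default was no password" scenario, which shifts the murky liability question back toward the user's own choices.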

I think the government needs to step in and regulate this industry: set certain security requirements for anything that connects to the internet. (Although there might be an inherent conflict of interest with the surveillance discussion from last week...) The impact of an insecure IoT is frightening. With microphones and cameras, others could hear and see everything about your life. Whether that's hackers or the government, it's a dangerous road to walk down. Even the devices that don't have cameras or microphones could be (and have been) used as a botnet. Those are two serious problems with an insecure IoT.

Overall, do I fear the IoT? Billions of interconnected devices? Yes and no. At the current pace of security, yes. I would not buy a smart home, webcam, or Echo. I think security has the potential to improve, however. If meaningful strides are made, perhaps overseen by a new government agency (a few administrations down the line, I guess), I would trust a web of objects. But not today.

Sunday, February 26, 2017

Snowden

I initially thought Edward Snowden was a traitor. It started because I didn't think that recording the metadata of phone calls was that bad. I had a belief that the government just wouldn't do anything bad with it; "big brother" arguments were all fiction. What harm would the government try, anyways? Target protesters? This is America, not a totalitarian state; I haven't really ever heard the government denounce protesters before. The response always seems to be, "Well, OK, I guess people care about that; time to make it a priority."

Over the last few years, though, I've revised those ideas. Better to make it impossible for the government to collect such data; if the data can be abused to better the government at the expense of the people, it probably will be, eventually. Rather than trust the people of the government to do the right thing, we should create a system where it is impossible for them to do the wrong thing.

I've also mostly decided that the collection the NSA was doing went too far. What remains more troubling to me, however, is the agency's loose interpretation of Section 215 and the fact that Congress was not properly aware of the situation. That seems dangerous to me: the executive branch gaining too much power because it refused to inform the other branches.

So in that sense, Snowden was justified. The NSA was collecting a level of data that I think was too much, and on top of that they were collecting it illegally and without the consent of the legislative branch of government.

However, Snowden diluted this story with some major missteps. Instead of leaking only information pertaining to the phone metadata, he dumped millions of documents on the media. He went to the media before he went to Congress. He also fled the country to seek asylum in China and Russia.

He leaked too much data. There was too much information there to make a lasting story. The phone records information stuck, but it also got somewhat lost in a debate over the rest. There were less impactful surveillance schemes that were probably wrong, but they distracted from and diluted the main message. Foreign surveillance information was also outed in these documents, detailing U.S. spying on adversaries and allies alike. This caused real harm to U.S. relations, and it drew attention away from the phone records story and toward the question of whether Snowden was a traitor.

He gave the information to the media. The idea was that they would be less biased about what to show and what to keep secret, but (as the "Yes, Edward Snowden is a Traitor" article put it) "society has not appointed journalists or newspaper editors to decide these matters, nor are they qualified to do so." Their top priority is not the public welfare but selling news. They held some items back, but arguably published more than they should have, and in some cases improperly redacted items. Also, some information was too secret to report at all. That information now rests on the media's less secure servers, and it was read by reporters without security clearance.

He fled the country. Some people have labelled him a coward for this, for breaking a law but not sticking around to try to prove himself justified; for not facing the consequences of his actions. I'm not sure where I stand on this; it's easy to criticize someone for not sacrificing themselves, but then again would I decide to go to jail as a traitor? I'm not sure. However, there is a tradition of people who knowingly break the law (e.g. flag burning) or whistleblow against their companies (e.g. Roger Boisjoly) and accept the consequences. This seems to fly in the face of that tradition.

Also, he carried sensitive U.S. information to adversaries who have complete control over him. Russia could easily pressure him to reveal government secrets; if he doesn't talk, they can threaten to hand him back to the U.S.

In summary, what he did was obviously illegal and partially unethical. Troubling secrets were being kept from the American public, and Congressional oversight was lacking; revealing those, as far as the collection of phone metadata goes, was ethical. However, the three missteps outlined above were not: ideally, he should have leaked much less data, sent it to Congress instead of the media, and remained in the country to try to prove himself in court.

Do the benefits outweigh the harms done to the American public? Hard to say. One article mentions that being aware of possible NSA surveillance probably spurred tech companies to encrypt more of their users' data, trying to avoid a "big brother" scenario. Americans were more aware of the possibilities of espionage, but a Pew survey showed that not many people overall thought worse of the NSA. Laws were eventually passed to forbid the NSA from collecting phone metadata, but the rest of the Patriot Act remained in effect. And there were real harms in terms of relationships with allies. And maybe terrorists will be more careful about how they communicate, but I find it difficult to imagine they weren't careful before. All in all, I think it's about a wash as far as the public well-being goes, but it's really hard to tell.

Personally, the whole discussion has made me more aware of government surveillance. I went from the idea that "if you have nothing to hide, you have nothing to fear" to a much more cautious "We really shouldn't give the government something it could abuse in the future."

Thursday, February 23, 2017

Hidden Figures Podcast Reflection

First off: we made a podcast! Recording it was much easier and more fun than I expected, and I didn't hate the sound of my own voice, which was a strange experience, because I used to abhor recordings of myself. Editing it was much harder. Conversations that I thought were really coherent weren't. I wound up putting in little bloops for when the conversation switched significantly (and I cut out a bunch of stuff in between), but I'm not sure that was the right decision. Editing was really fun but took a lot more time than I expected. Overall, it was a great experience.

Moving on to the meat of the response:

The main obstacles women and minorities face are established groups that are prejudiced against hiring and advancing them. Also, the general society they grow up in may implicitly or explicitly try to teach them that STEM is a men's world, so they are discouraged from being interested. 
One reason this might be so challenging to break is that engineers hire other engineers who look and think like them, with rigorous technical interviews that accidentally maximize People Like Us bias. (I've covered this extensively in a previous blog post.)

I don't think famous role models are important. I don't think I ever had one. The Mythbusters would be about the closest I ever had to a popular role model. However, my dad was much more important. He works in Computer Science, and he would discuss work at the dinner table once in a while. I never really understood what was going on (I distinctly remember a conversation where I had no idea why a computer would need a "clock cycle"), but he was interested in his work and seemed to like it. Even if I couldn't understand the problems, they sounded intriguing, and so did his process for solving them.
So I don't think popular role models have as much of an effect as people seem to think - at least, not to me. I think what's more important is someone close to you to encourage you to try out the field. Also, now that I think of it, when I was trying to decide between joining Science or Engineering, both my parents pushed towards engineering, since that's what they did. So they must have had a significant impact on everything else in my life that helped guide me to choose STEM.

Sunday, February 19, 2017

Challenger

(Note: writing this as a blog is still weird. I feel like I need to say, "I know the Challenger is a random topic, but I have to write about it for a class," despite knowing that only the professor and TAs will ever read this. Oh well. Blogs.)

So, the Challenger disaster. I only recently learned that engineers were against the launch before it happened. I knew that the O-rings failed, but I didn't realize that it was so predictable that they would fail.

What were the root causes? Some of the articles made it sound like all the engineers were clamoring for the launch to stop, but management refused to listen. I think it was more subtle than that. It sounds like there were a lot of communication problems, and at least two different parties involved (NASA and Morton Thiokol). Also, in hindsight, engineers complaining about the part that caused the disaster seems ominous, but at the time, it was an O-ring: one of thousands of parts in an incredibly complicated rocket. How do you weigh which part has a significant problem and which one engineers are needlessly fretting over? Basically, the managers weren't incompetent idiots. I'd wager they were weighing many different possibilities, and the O-rings didn't strike them as particularly dangerous.

But that's not to say there weren't problems. That shuttle never should have launched. There needed to be clearer communication. Someone refused to sign off on the launch; that sounds like a huge red flag, but it was ignored, and his boss signed off instead. The engineers had data but didn't present it convincingly, and when they brought up arguments, they were quickly dismissed. I think the managers fell into groupthink. They too quickly disregarded the views of their underlings, and they were probably too focused on not delaying a heavily watched launch and messing with the schedules of millions of viewers and the first civilian astronaut. I think the root cause was the system: there needed to be an established way for a concerned engineer to attempt to block the launch. If he or she is willing to go through that much trouble, something must be wrong, and the arguments should be heard.

Roger Boisjoly is an interesting case. He didn't share his concerns with the public beforehand, but did in the investigation afterwards, which technically isn't whistleblowing since it's after-the-fact. I still think he was justified, though. The public needed to know about NASA's flawed system, so that NASA would be motivated to fix it. It was more whistleblowing about managers ignoring data than whistleblowing about the accident itself, and ignoring warnings is a serious problem.

However, this additional oversight didn't happen. In 2003, Columbia disintegrated. Why? Maybe the story of Roger Boisjoly didn't become popular enough; everyone only remembered the O-rings. Maybe the company's retaliation worked, and discouraged other engineers from speaking out with their concerns again. I think the retaliation is the worst part of this all. The public (and the government) needed to know that warning signs were ignored, so they would be heeded in the future. Punishing him was counterproductive.

Also, whistleblowing is worth it, even if it destroys your career. Doing so has the chance to save lives or benefit society while damaging the company you work for. That can get you fired and make it hard to get hired again, but keeping quiet is unethical.

Sunday, February 12, 2017

Diversity

I think the general lack of diversity in the computer science industry is an issue that needs to be addressed. What's the best way to address it? Hell if I know. But it's a thing, and I don't think it should be.

Focusing on gender, some articles have addressed the idea that on average males are better suited to the more engineering-y fields for various reasons. I think this carries some weight, but I think it only explains a small part of the gap. (Also, on-average is key here. As Hari Seldon would say, statistics and Psychohistory cannot predict the thoughts or actions of individuals. -Asimov's Foundation reference.)

I think the larger factor here, however, is unintentional discrimination. Some evidence for this: the proportion of women in computer science has been declining. That suggests non-evolutionary causes (unless we're devolving really, really fast...somehow).

I think this discrimination explains a large part of the diversity gap, too. We've read before about how the computer science giants "hire only the most perfect-est hire imaginable. Ditch 100 perfect ones for the one that is even better." But that tends to maximize unintentional bias. You want to hire the person you really connect with, which means you're much more likely to hire someone very similar to yourself. So if you're a white male, you wind up hiring other white males because of People Like Us syndrome.

Another factor might be access to computers. Colleges assume you know your way around a computer already. Can you imagine asking basic Windows questions in a fundamentals of computing course? "What's a file? What do you mean by 'double click'? Control-what-delete?" I'm sure there are lots of other skills about using a computer that I just take for granted, since I've had one all my life. Typing takes time to learn. Is moving the mouse intuitive, or does it take a while to master? I honestly don't remember. But those are the basics. There have to be thousands of things you can only learn through practice and familiarity.

Lower income students are less likely to have had time to get familiar with computers, so they're already at a disadvantage in the field. According to the "When Women Stopped Coding" article, girls get less access to computers than boys do, even if they display interest. Also, it seems like companies expect you to have been coding since you were a child. That's really difficult if you're sharing a single computer among a family.

So the culture of the people making hiring decisions and economic status have an outsized effect on diversity in the computer science industry: the first because "only hiring the best" accidentally amplifies hidden biases, and the second because access to your own computer can be critical for developing skills early on.

So what should be done about the situation? I don't know. Blind interviews have been suggested to try to eliminate some forms of bias. I think it goes without saying that Breitbart's idea of a cap was terrible. Harvey Mudd seems to be doing a good job encouraging women to pick up the field by making it more interesting and accessible to them, especially if they aren't as familiar with computers.

This isn't a just state of affairs; change is required to level the playing field.