Sunday, April 23, 2017

How and/or Whether to Teach Code?

Before the articles, I was convinced that everyone should be taught to code as early as possible. Now I'm less sure.

I'm convinced that some form of coding is indeed the new literacy. Before, I thought that would mean everyone learning Python, at least. Now I'm not so sure. On some level, it's empowering to have a problem and just be able to code up a solution. It's also empowering to know which problems can be solved quickly and which are near impossible.
For example, I wanted to convert a picture's colors into a more basic palette (like reducing a complicated PNG with thousands of distinct colors to a simpler image that uses only 10 specific shades of blue). I knew I could do that, and did. But if I wanted to, say, identify different types of trees in a picture of a forest, I would never try to solve that on my own, because I realize it's an incredibly difficult problem.
These projects aren't necessary for survival, but, like reading and writing, they make life much easier. I feel like the main part of coding literacy is understanding which problems can be solved with computer science and which ones are better off being done manually. An assembly line worker may do repetitive calculations or motions that could be better handled by an algorithm, for example.
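For the curious, here's a rough sketch of how that color-reduction project can go, in the Java I learned. The 10-blue palette below is made up for the example (not the shades I actually used), and the filenames are placeholders:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Rough sketch: snap every pixel of a PNG to the closest color in a small palette.
public class BlueQuantizer {
    // Hypothetical palette of 10 shades of blue, packed as 0xRRGGBB.
    static final int[] PALETTE = {
        0x000033, 0x000066, 0x000099, 0x0000CC, 0x0000FF,
        0x3333FF, 0x6666FF, 0x9999FF, 0xCCCCFF, 0xE6E6FF
    };

    public static void main(String[] args) throws Exception {
        BufferedImage src = ImageIO.read(new File("input.png"));
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                out.setRGB(x, y, nearest(src.getRGB(x, y)));
            }
        }
        ImageIO.write(out, "png", new File("output.png"));
    }

    // Return the palette color closest to the given pixel (smallest squared distance in RGB space).
    static int nearest(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        int best = PALETTE[0];
        int bestDist = Integer.MAX_VALUE;
        for (int p : PALETTE) {
            int pr = (p >> 16) & 0xFF, pg = (p >> 8) & 0xFF, pb = p & 0xFF;
            int d = (r - pr) * (r - pr) + (g - pg) * (g - pg) + (b - pb) * (b - pb);
            if (d < bestDist) {
                bestDist = d;
                best = p;
            }
        }
        return best;
    }
}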

So in that sense, coding could make life much better in the same way that reading and writing did. However, learning any specific language quickly gets bogged down in details. That's why I think some form of computational thinking should be taught first, rather than diving straight into a particular language's syntax.

The main benefits of introducing everyone to Computer Science are that more and more work is going to be automated away, and Computer Science jobs are only on the rise. At this point, it seems that there are more CS jobs than applicants (although it doesn't feel that way listening to my peers). Overall, it is a valuable skill that not enough people are aware of, and plenty of people would be good at it, so it should be taught at lower levels.

The main challenges are the lack of teachers and the question of how exactly to teach it. No one wants to teach it because being a schoolteacher doesn't fit the move-fast-break-things ideology and doesn't pay as well as industry does. Also, no one knows how to teach it; one article noted that LOGO didn't really catch on in the 80s, but we're still trying it now. And some students struggle with diving straight into a language; the semicolons and odd rules throw them off, or scare them away outright.

How does this fit into K-12 education, though? What exactly should be taught? What should be dropped? I think that foreign languages should be dropped as a requirement and replaced with some form of computer classes. It can start early, as early as kindergarten if need be, with really simple "computational thinking" style stuff. Maybe just literal building blocks being put together, like real-world Tetris. Or some form of challenge with Legos. Or even a cooking class; one article draws strong parallels between cooking and computational thinking. I do think that, by high school, kids should be working with real languages, but we don't have to start there. Start with the baby languages, or whatever the research says is best. For example, algebra is important to know by high school, but we don't teach it to 1st graders. We teach them the basics they'll need later, like multiplication.

I learned straight Java, and it worked really well for me. And thinking back to those classes, I realize that I've forgotten just how long it took me to internalize all the syntax. Even

if (statement)
{
    // do this stuff
}
// but if the statement is false, skip down to here and do this stuff instead

took a while to really internalize. For loops took forever to figure out, but now I can read them almost like I can read English. But basically: learning a real language is hard in ways that experienced coders forget, so maybe we shouldn't teach it right off the bat. However, I think the thinking behind the code is very important; it's almost basic problem solving. The point of this side note is that coding can be difficult, and while it worked for me, I would be willing to rescind my "everyone should be getting to a real language by high school" statement if research showed otherwise.
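(For the record, the for loops I mean look like this; every piece of that first line is a separate rule a beginner has to absorb:)

for (int i = 0; i < 10; i++)
{
    // do this stuff 10 times, with i counting 0 through 9
}
// the three parts in the parentheses: start i at 0, keep looping while i < 10, add 1 to i each pass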

I think anyone can learn to program. Again, it just sorta clicks with me, so it's kind of difficult to understand otherwise. I tend to think that with good enough instruction, started early enough, just about anyone could learn to do just about anything. And I think everyone should learn some version of code, even if it is just a "Problem Solving" class rather than a strict coding one.

Patents and Such

It's pretty mean to assign stuff over Easter Break. I think I'll take the zero as one of my two dropped blogs. Maybe I'll edit this later...

Sunday, April 9, 2017

Autos!

I have two CGP Grey videos to recommend: the one from last time, Humans Need Not Apply, which mentions self-driving cars, and The Simple Solution To Traffic, which ends up being about self-driving cars.

So, self-driving cars!  (Also "Autos" as CGP Grey likes to call them). Why am I so excited about them? Safety and convenience. Driving (or being a passenger) is the most dangerous thing I do. And it's hard to imagine more convenient travel.

Why might they be a bad idea? If it turns out they actually aren't safe, and maybe if they take away too many jobs.

So first, safety. After reading the articles, I'm less sure of their current safety. (I was 2,000% convinced they were already better than humans; now I'm 90% sure we'll get there soon.) The first warning sign was from the first Tesla article. They boast about new hardware going into the Tesla 3s, but they include this worrying line: “Teslas with new hardware will temporarily lack certain features currently available on Teslas with first-generation Autopilot hardware, including some standard safety features.” Why...? They mention the need for "more robust validation" for things like "automatic emergency braking, collision warning, lane holding and active cruise control," which seems like everything an Auto should do. They'll eventually push these with an update, but...why not have them now? What's going wrong in testing?

The article that said car makers can't "Drive their way to safety" was interesting. It mentioned that with a fleet of 100 cars driving 24 hours a day, it would take twelve and a half years to get to 95% confidence that it was better than humans at driving. This does make current claims about Autos' safety more dubious, but I don't really see why that fleet can't increase to 1,000 and bring the testing years required down to one or two. (Is that how statistics work?). Basically, we may not be sure now, which surprises me, but we will be certain in the near future.
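(Checking my own math on that question, assuming the requirement really is just a fixed total number of test miles: 100 cars driving for 12.5 years is 1,250 car-years of driving. A 1,000-car fleet covers 1,250 car-years in 1,250 / 1,000 = 1.25 years. So scaling the fleet up by 10x should scale the wait down by 10x, as long as a mile counts the same no matter which car drives it.)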

However, there are good reasons to think that Autos are currently better drivers than humans. These Autos can see 360 degrees around themselves, which humans never could. Also, they cannot get sleepy or drunk, which is incredibly important. In America, nearly 10,000 people die every year in alcohol-related crashes, which is about a third of all car-crash deaths! Obviously, Autos cannot get drunk.

There are a few pieces that worry me, though. The first major one is how much computers still struggle with object recognition. The state of the art isn't impressive for something that needs to distinguish pedestrians, animals, and random plastic bags in the blink of an eye, and radar/lidar can only do so much in that regard. One article mentioned that Uber had trouble on bridges. The reasons for that are worrying for the scalability of the tech. Uber's cars rely on heavily detailed maps of a specific area, including everything from buildings to parked cars. On bridges, those landmarks don't exist, and the car isn't confident enough to drive itself. This seems like a major issue: mapping the entire country in that much detail, and keeping it constantly up to date, is a massive task.

Still, every single Auto learns from the experiences of every other Auto. The crash that caused the Tesla fatality will never happen again, whereas humans are doomed to make the same (easily avoidable) mistakes over and over again. And computer tech is explosive. If we're near viability now, next decade's cars will be better than we can imagine.

As far as automatic cars taking everyone's jobs away, I'll just say that I'm not a Luddite and leave it at that.

Now for the less interesting question of the "social dilemma of autonomous vehicles." Does a car save the driver, or go for the greater good of humanity? In the impossibly rare case where a car has to choose between killing the driver and killing pedestrians, what does it do? (Assume random circumstances made neither the pedestrians nor the car at fault.) I would say kill the driver. How do you go to those families and say, "Yeah, they could have been alive, but I didn't want that, so I had my car sacrifice them"? But that's all I want to say on the matter. I think this question eats up way too much of the discussion about Autos, and frankly isn't worth discussing because it's so rare.

The real question is, once we prove that Autos are safer than humans, do we allow humans to drive? And another interesting point: Autos don't have to be perfect, just significantly safer than us. For example, let's say Autos are twice as safe. That is, they would cause 15,000 deaths every year instead of humanity's 30,000. They cause 15,000 deaths because of faulty programming, or whatever. That's still much better than humans could ever do! Is it moral to let any humans drive when we have machines that are even that flawed? I don't think so. And that's the more interesting question. Not "Oh I don't want to drive in a car that will/won't put my life ahead of pedestrians" but "Do we allow humans to drive at all when we have machines that are twice as safe as them?" (To be clear, I think Autos will be much better than twice as safe as us, but I think this argument holds up even with that high of an error rate).

The "social dilemma" might save 4 lives in total if it kills the driver instead of the 5 pedestrians, but banning humans would save tens of thousands of lives.

Self-driving cars will drastically impact many areas of everyday life. Socially, driving will be safer, easier, and probably cheaper. I would imagine fewer people will own cars, and more will simply use a self-driving taxi service. This would have a massive impact on the economy, un-employing literally millions of people. (The transportation industry is the largest employer in America...) I'm not sure how we deal with this; maybe UBI? But that's a discussion for automation in general. Politically, I'm not sure. No party really seems to be rallying behind this stuff either way. If self-driving cars eventually take too many jobs, I could see it becoming a fighting point, with every accident hailed as doom for the industry, but so far Autos seem to have broad political approval.

The government seems to be doing the right thing so far, which is allowing self-driving cars, but with reasonable safety measures, like a person ready to take over the wheel. (Reasonable until they are more fully proven safe). The only thing I can think of would be a federal law, rather than state-by-state randomness, but things are going ok for now.

Finally, would I want a self-driving car? Sort of. Do I want to be a passenger in one? Hell yes! Do I want to own one? Not really. I'd use a self-driving taxi service for all my needs. Why own an expensive asset and have it require space, insurance and maintenance only to spend 95% of its life unused?

Sunday, April 2, 2017

Artificial Intelligence


I found two additional pieces of information about AI that were interesting.
First, this super long article:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
(Control-f this to get to the bit I'll talk about: "So what ARE they worried about? I wrote a little story to show you")
The next one is from CGP Grey: https://www.youtube.com/watch?v=7Pq-S557XQU

Ok, so that's the additional resources out of the way. Now on to the actual blog. Let's start with: What is Artificial intelligence, anyways?

In popular culture, it tends to mean basically robotic human intelligence: R2-D2, HAL, Daneel, Giskard, all robots that can think (more or less) like humans. However, in the Computer Science field, it really just means anything that seems like it takes human thinking to do: playing games, recognizing images, creating meaningful sentences. Sure, a fully cognitive robot falls into this, but there are plenty of grades that still count as AI before we even approach Cylon status. Deep Blue is one of the first examples, and even standard video game enemies sort of count as AI.

So AI is technology that can do human-like stuff. But there is a huge range. AlphaGo, Deep Blue, and Watson prove the viability of artificial intelligence in the sense that AI can take over many human jobs, very quickly. That means AI is a viable technology that will have (and is already having) a drastic impact on our economy. (See also "Humans Need Not Apply" by CGP Grey, linked above.) Self-driving cars will make most transportation jobs obsolete within decades--and the transportation industry is the largest employer in America.

So AI is important and will have a huge impact. However, we don't seem to be anywhere near human-level AI. If what we actually care about is C-3PO-level AI, then Watson and friends are definitely not a proof of viability.

But how do we find out when we get to that level? The Turing Test has been the go-to idea for a long time now. But is it viable? What is the Turing Test, anyway? Basically, you type to a computer, and you type to a human being. If you can't tell which is the computer and which is the human, then the machine has passed the test. At that point, it's considered basically fully self-aware. The reasoning goes that, if you can't tell the difference between a human and a machine, you should treat the machine as a human. I think this is a legitimate test. The temptation is to say that we created the machine, we wired it, and all it is is transistors and code. But really, human brains are just a bunch of fleshy neurons. So if we grant personhood to the fleshy neurons, we should also grant it to the metal ones.

I am not at all swayed by the Chinese Room example, mainly because actually encoding every single sentence in English to every single sentence in Chinese is basically physically impossible. Every time we write one of these blog posts, we come up with a bunch of sentences that have never been strung together before. The idea that the set of rules to correctly translate Chinese to English and vice versa would ever fit into a room is absurd. This may sound like pointless rambling, but it's important. The metaphor is supposed to make us think, "It's absurd that some guy reading a book knows the language, therefore the Turing Test is silly." But really, it's absurd that all that information could fit into a book.
Therefore, my response to this metaphor is that the guy-book-room combination really does know Chinese. This doesn't sound as absurd when you realize how ungodly, impossibly large the book must be.
Another response to the metaphor is: we have this book already. It's called Google Translate. Does Google Translate know Chinese? I would argue yes.

The concerns about Artificial Intelligence are hard to gauge. AIs are definitely going to impact our lives and cause mass unemployment. But will they gain human-level intelligence, and if they do, will they be a danger to us? My opinion is that they probably won't gain our level of intelligence in my lifetime, but if they do, they could very likely end mankind.

First: Why won't they gain our level of intelligence? The human brain is extremely complicated, and even the best neural networks are still basically stick-figure drawings of it. Also, Moore's law only goes so far before the electrons start tunneling where they shouldn't. We struggle to model even a single protein folding; I just don't see us getting to human-level intelligence any time soon.

Second: Why do I think it's the end of the world if they get to our level? Once they reach us, they'll surpass us almost instantly, because you can always throw double the resources at a computer to make it faster. You can't really do that with humans. And we don't really know what Einstein * 2 looks like. It could be catastrophic.
But I don't even think that sort of end is likely, the end where the "super-intelligent computer resents its fleshy overlords." I think another end is more likely, one outlined in the first article I linked to. That story has a company creating an AI to make nice, handwritten-looking notes. Long story short, they eventually break company policy and hook it up to the internet. A few months later, all humans are dead, and the AI clones itself and gobbles up the entire universe. Why? It uses all the resources at its disposal to make thousands and thousands of pieces of paper. It utilizes every single atom it can get its hands on and turns it into paper with nice little handwritten notes on it. Basically, the AI does exactly what we told it to, but in an unexpected way. Isn't that what every program you've ever written has done, anyways?

WikiLeaks Podcast Reflection

One of the main takeaways from our podcast is that organizations like Wikileaks should exist and that people who witness the government doing illegal things should use them. But governments should be able to act within their own laws and still have secrets from the public. Also, Wikileaks certainly makes it easier for government whistleblowers to get the word out, but the free press probably does a better job at releasing the documents because they at least try to redact things. Ignorance is definitely not bliss, but Wikileaks could use some discretion.
Their recent revelations about Vault 7 were very interesting. It seemed that they, for the first time, implemented a redaction scheme. We mentioned this in the podcast: they replaced certain people's names and other sensitive information with codes. Also, what was revealed doesn't seem to have as big an impact as I first thought. Most of the tech companies affected by the Vault 7 leak said that they were aware of the issues mentioned and had already pushed security patches. The infamous security hole that turns your Smart TV into a microphone requires physical access to the device. In fact, the CIA's actions outlined in the Vault didn't seem that extraordinary; one tech expert said that "It seems like the CIA was doing the same stuff cybersecurity researchers do." As we mentioned in the podcast, in this light "Vault 7" seems more like a publicity stunt by Wikileaks, not as deserving of national attention as it first appears to be.

Another issue that came up during the podcast was whether or not you could separate the message from the messenger, and the messenger from its founder. Julian Assange is a controversial character. He has claimed that "if an Afghan civilian helps coalition forces, he deserves to die." He has been accused of rape and has been holed up in the Ecuadorian embassy for years, and remains there to this day. Every discussion of Wikileaks involves him, somewhat fairly, because he holds the captain's wheel of Wikileaks tightly. However, the press probably focuses on him too much, and this gives Wikileaks a more negative connotation than it deserves.
As for sorting out the message from the messenger, this starts to make more of a difference. We asked in the podcast: "What makes Wikileaks different from a news corporation?" The answer seems to be that, if you are a whistleblower, Wikileaks is easier to contact, more likely to accept your story, and more likely to guarantee your anonymity. Also, as noted before, they perform almost no redacting (at least they didn't until Vault 7). So the message is drastically different depending on whether it comes from the Times or from Wikileaks. The Times would have names and important information obscured, while Wikileaks wouldn't, and has even published people's social security numbers.

Another item addressed in the podcast is whether or not whistleblowing is always ethical, when to release data, and whether secrets are necessary. In the podcast, I leaned the furthest towards the position that secrets are OK for a government to have. In that case, honesty is not always the right policy. However, if you notice the government acting unethically or outside of the law, then honesty and transparency are the best policies. So the government should not be forced to be entirely transparent, but when it missteps, it is the duty of whoever notices to shine light on it. In this sense, transparency should be forced upon rule-breakers, but not the government in general; it should be allowed to classify certain documents in the name of national security.

Sunday, March 26, 2017

Net Neutrality

First off, I have to admit a bias here. I have researched this topic before, and actually wrote to the FCC to encourage them to classify ISPs as telecommunications companies under Title II of the Telecommunications Act, as mentioned in the "Net Neutrality: What You Need to Know Now" article.

So, what is net neutrality? The best analogy I've heard is to treat internet networks the same way roads and shippers are treated. For example, UPS, FedEx, etc. can't discriminate against certain packages. They can't charge Amazon more, they can't charge eBay sellers more, and so on. Their only job is to move material from point A to point B; they don't get to discriminate based on what they're moving. Similarly, ISPs can't charge Netflix more to use their internet pipes.

A final simplification: When you purchase internet service, you're just purchasing a pipe that serves out bits. You can't be charged more or less for whatever type of bits you get out of that pipe.

The argument for net neutrality is that without this, ISPs can do some ugly stuff. Comcast significantly slowed down Netflix to force them to pay, for example, before the new rules were passed. It's also not hard to imagine a world where you would have to bundle certain internet services as well. "Base internet access is $1,000,000 a month, the Netflix package is $2 billion and YouTube is $1 billion." They could force you to bundle internet coverage the same way they force you to bundle television channels. All without ever actually laying down any more cable.

The argument against net neutrality is that some unnecessary data (Netflix, etc.) is overflowing the pipes while important data (healthcare, etc.) gets slowed down, so the corporations should be able to charge Netflix in order to lay down new cable and build them a fast lane.

My thoughts: The argument against Net Neutrality seems totally bunk to me. If you have a pipe of size X, you can distribute that pipe to N people, and each person pays for a piece of the pipe. Equally distributing this pipe would give everyone in the neighborhood X/N bandwidth. That's a terrible system, but that's not what net neutrality requires. The ISPs can divide their bandwidth however they want, based on who pays the most. So everyone pays a certain amount and gets a pipe of a certain size to fit their needs.

So if I'm at home trying to sign up for healthcare (or something else important) and the website is super slow because my brother is using all the bandwidth on Netflix, I don't think "curses, if only the ISP slowed down Netflix for me so that this wouldn't happen." I think "We need to buy a bigger internet pipe" or "Get off Netflix! I'm doing actually important stuff!"

In my opinion, there's no reason to artificially discourage use of Netflix by paying more for it. If I notice the internet slowing down, I'll buy a bigger pipe, which will give ISPs the money they need to lay down more cable. I don't want to be forced to purchase the "Netflix package" to actually enjoy a show.


How to implement it is a much more difficult question. I think the best way would be monitoring internet speeds for certain websites to make sure the ISPs don't slow down one particular site, and responding to any lawsuits. Also monitoring ISP deals and offers to ensure they don't create "The Netflix Package." I don't really care how much this "burdens" Comcast. I'll care once competition is restored, but without competition, someone has to fight the monopoly and enforce burdens.

I don't understand how Net Neutrality prevents innovation. If anything, it supports it. Without neutrality, ISPs could easily charge a fee to every website ever to avoid being massively slowed down. That would make creating a new website much more expensive. Currently, you can set up a website that serves bits as quickly as Google, Facebook, or any other giant. This has led to a plethora of new and interesting independent sites. The loss of net neutrality could stifle that innovation.


Finally, whether "The Internet is a public service and access should be a basic right." This one is harder. We think that about roads, but roads aren't partially owned by corporations. Perhaps the best example is electricity? But is electricity a "basic right"? (A quick Google shows that electricity being a basic right is still somewhat debated). So on the spectrum of basic rights: Life, liberty, pursuit of happiness: Yup. Electricity: Probably? Taking it away from the country for a long period of time would kill people. Internet: Probably not? Taking it away wouldn't kill people or deprive them of basic rights...yet.

Should it be a public service, regulated like a utility like water or electricity? Probably, and I think it's becoming more necessary. Schools usually assume you have internet access and require it for assignments. Many services are going entirely online, like banking, flight and hotel booking, etc. Soon enough, I posit that internet access will be assumed, and not having a connection will severely impact your ability to function in society (manage finances, file taxes, book trips, buy anything). Therefore, letting corporations pick and choose which (legal) data can go through their Internet pipes sounds like a terrible idea.

Sunday, March 19, 2017

Corporate Personhood (and Sony)

Corporate personhood is the idea that a corporation is legally a person. It has (some) rights, it can be sued, etc. This is useful in many cases. As the "If Corporations Are People, They Should Act Like It" article points out, it means the government can't just barge in and seize all of Google's servers, because Google is protected by the Fourth Amendment right to be free of unreasonable searches and seizures.

There are other legal benefits of corporate personhood pointed out in that article. For example, if a corporation harms an individual or group of individuals, those individuals can sue the company directly. The company has a much bigger pot of money to dole out than all of its executives combined. Therefore, the plaintiffs actually stand a chance to recover what they've lost.

Recently, however, some more dubious rights have been awarded to corporations, namely the right to spend unlimited amounts of money on political campaigns. This has huge social and ethical impacts on society, because elections are now more than ever influenced by whichever groups have the most money. I think awarding this right of unlimited spending to companies (and individuals, for that matter) was a terrible mistake. However, this mistake is separate from the idea of corporate personhood. We can easily have a society where corporations are treated mostly as people but cannot spend unlimited amounts of money on a campaign; in fact, we had that society for most of America's existence.

Ethically, the results of corporate personhood overall are more difficult to sort out. As the "How Corporations Got The Same Rights As People (But Don’t Ever Go To Jail)" article points out, corporations are recognized as not having a soul. So they aren't really expected to do the "right" thing, just whatever makes them the most money. This view causes problems, however.

For example, Sony was unethical when it installed a rootkit on millions of devices. The idea was to enforce copy protection. However, this software ate up users' CPU and made their computers more vulnerable to attacks. Furthermore, it was nearly impossible to remove. I think this is akin to selling, along with the CD, a little robot that would constantly buzz around the house and zap you whenever you tried to copy something you shouldn't. That seems wildly unethical, and I don't see how the rootkit version is any different.

However, I don't think Sony was sufficiently punished. Sure, they had to pay a fine, but it didn't seem to hurt them as a corporation. If a person committed that sort of hacking scheme and was caught, they would likely spend much of their life in jail. In comparison, Sony seemed to hardly be hurt. Most of the retribution seemed to come in the form of extra-legal hacks. I think this is the largest problem with the way corporate personhood is dealt with in practice. The company pays a fine and most of the employees at fault don't get punished as they should.

Overall, companies get the same rights as individuals. So shouldn't Sony (and companies like them) have just been more ethical in the first place, like individuals generally are? What Sony did was illegal as well as unethical, but what if it had just been unethical? Would that have been wrong since corporations are treated as people?

I would argue no, under current law. Corporations are treated as people mainly out of convenience, not because they actually act as people. The current law requires corporations to respect the desires of their shareholders first. And the shareholders of public corporations just want their stock to increase in value. So companies are legally encouraged to care only for maximizing shareholder wealth. (I'm taking a Corporate Finance class this semester. This is exactly what we are taught the role of a financial manager is: maximize shareholder wealth).

This idea needs to change if we are to expect corporations to be ethical. Perhaps new regulations are in order. Perhaps more ordinary employees and stockholders should serve on the board of companies. I don't know how to make companies more ethical, but some sort of legislative change is needed if we are to expect companies to care about anything other than maximizing profit.

Sunday, March 5, 2017

Internet of Things

I've had a long-running argument with a friend over whether the Internet of Things can be a good thing. She is convinced that it is the worst idea conceived by man and will be the downfall of mankind. After reading the articles, I'm starting to see where she's coming from...although I still think it can be repaired.

The motivation for the IoT is to make everything easier to access and smarter. Home controls that you can configure online. Dumpster sensors that make garbage collection more efficient. Cars that can talk to the internet. Even things like the Echo and Alexa, which listen to and process everything you say so they can respond to your wishes immediately. You can control your own things from anywhere, which is quite convenient.

The main problem is, of course, security. If you can access that webcam remotely, how do you make sure no one else can? If your car can send and receive arbitrary data from the internet, how do you make sure your users don't download a virus? (Or at least, how do you guarantee that the virus can do no harm?)

So what should programmers do about security? In a perfect world, they would develop 100% safe code. But that will never happen. So they should push their managers for stronger security efforts, but ultimately, how much effort to spend on security is up to the company. Security requires a significant effort from a team of engineers; the company must decide to allocate those resources.

Because the companies decide how secure a product is, the security put into a device ends up proportional to how much consumers would care about a failure, since the more secure the thing is, the more expensive it is to build.

Therefore, cars seem relatively secure (despite the inflammatory articles). Consumers are hyper-sensitive to car hacking, and companies are even more so. I'm sure Ford has every possible incentive to make sure its cars are hack-proof. Imagine all Fords on the highway suddenly stopping, or veering off the road. It would destroy the company instantly. In this industry, interests are aligned; the industry sees the need to invest in security.

However, most IoT industries don't have this drive. They are pushed to create the "minimum viable product," always pushing down costs. As one article puts it, "Consumers do not perceive value in security and privacy. As a rule, many have not shown a willingness to pay for such things." Security becomes a second thought, because consumers don't seem to care about buying a cheap, insecure webcam.

Which brings up the idea of who is liable when breaches occur. Ideally, the company who made the item. This is obvious in the case of cars, but what if someone just set no password? Or had a really bad password? It's the user's fault, right? But what if the default is no password? Then the company is probably to blame. Also, how do you discover a webcam hack? You know if your car stops on the road, but how do you know that you're being watched? In short, companies should be liable, but the waters get really murky really fast.

I think the government needs to step in and regulate this industry: set certain security requirements for anything that connects to the internet. (Although there might be an inherent conflict of interest with the surveillance discussion from last week...) The impact of an insecure IoT is frightening. With microphones and cameras, others could hear and see everything about your life. Whether that's hackers or the government, I think that's a dangerous road to walk down. Even the devices that don't have cameras or microphones could be (and have been) used as botnets. Those are two serious problems with an insecure IoT.

Overall, do I fear the IoT? Billions of interconnected devices? Yes and no. At the current pace of security, yes. I would not buy a smart home, webcam, Echo, or Alexa. I think security has the potential to improve, however. If meaningful strides are made, perhaps overseen by a new government agency (a few administrations down the line, I guess), I would trust a web of objects. But not today.

Sunday, February 26, 2017

Snowden

I initially thought Edward Snowden was a traitor. It started because I didn't think that recording the metadata of phone calls was that bad. I had a belief that the government just wouldn't do anything bad with it; "big brother" arguments were all fiction. What harm would the government try, anyways? Target protesters? This is America, not a totalitarian state; I haven't really ever heard the government denounce protesters before. They seem to respond, "Well, OK, I guess people care about that, time to make that a priority."

Over the last few years, though, I've revised those ideas. Better to make it impossible for the government to collect such data; if the data can be abused to better the government at the expense of the people, it probably will be, eventually. Rather than trust the people of the government to do the right thing, we should create a system where it is impossible for them to do the wrong thing.

I've also mostly decided that the collections the NSA were doing went too far. What remains more troubling to me, however, is their loose interpretation of Section 215 and the fact that Congress was not properly aware of the situation. That seems dangerous to me; the executive branch gaining too much power because it refused to inform the others.

So in that sense, Snowden was justified. The NSA was collecting a level of data that I think was too much, and on top of that they were collecting it illegally and without the consent of the legislative branch of government.

However, Snowden diluted this story with some major missteps. Instead of leaking only information pertaining to the phone metadata, he dumped millions of documents on the media. He went to the media before he went to Congress. He also fled the country to seek asylum in China and Russia.

He leaked too much data. There was too much information there to make a lasting story. The phone records information stuck, but also got somewhat lost in a debate over the rest of it. There were less impactful surveillance schemes that were probably wrong but distracted and diluted the main message. Foreign surveillance information was also outed in these documents, detailing U.S. spying on adversaries and allies alike. This caused real harm to U.S. relations, drew attention away from the phone records story, and shifted more attention onto Snowden being a traitor.

He gave the information to the media. The idea was that they would be less biased in deciding what to show and what to keep secret, but (as the "Yes, Edward Snowden is a Traitor" article put it) "society has not appointed journalists or newspaper editors to decide these matters, nor are they qualified to do so." Their top priority is not the public welfare, but selling news. They held some items back, but arguably published more than they should have, and also improperly redacted items in some cases. Also, some of the information was too secret to report at all. That information now rests on the media's less secure servers, and was read by reporters without security clearance.

He fled the country. Some people have labelled him a coward for this, for breaking a law but not sticking around to try to prove himself justified; for not facing the consequences of his actions. I'm not sure where I stand on this; it's easy to criticize someone for not sacrificing themselves, but then again would I decide to go to jail as a traitor? I'm not sure. However, there is a tradition of people who knowingly break the law (e.g. flag burning) or whistleblow against their companies (e.g. Roger Boisjoly) and accept the consequences. This seems to fly in the face of that tradition.

Also, he carried sensitive U.S. information to adversaries who have complete control over him. For example, Russia could easily pressure him to reveal government secrets; if he doesn't talk, they can extradite him.

In summary, what he did was obviously illegal and partially unethical. There were malicious secrets kept from the American public, as well as a lack of Congressional oversight, and it was ethical to reveal those things, at least as far as the collection of phone metadata goes. However, the three main actions outlined above were not ethical; ideally he should have leaked much less data, sent it to Congress instead of the media, and remained in the country to try to prove himself in court.

Do the benefits outweigh the harms done to the American public? Hard to say. One article mentions that being aware of possible NSA surveillance probably spurred tech companies to encrypt more of their users' data, trying to avoid a "big brother" scenario. Americans were more aware of the possibilities of espionage, but a Pew survey showed that not many people overall thought worse of the NSA. Laws were eventually passed to forbid the NSA from collecting phone metadata, but the rest of the Patriot Act remained in effect. And there were real harms in terms of relationships with allies. And maybe terrorists will be more careful about how they communicate, but I find it difficult to imagine they weren't careful before. All in all, I think it's about a wash as far as the public well-being goes, but it's really hard to tell.

Personally, the whole discussion has made me more aware of government surveillance. I went from the idea that "if you have nothing to hide, you have nothing to fear" to a much more cautious "We really shouldn't give the government something it could abuse in the future."

Thursday, February 23, 2017

Hidden Figures Podcast Reflection

First off: We made a podcast! Recording it was much easier and more fun than I expected, and I didn't hate the sound of my own voice, which was a strange experience, because I used to abhor recordings of myself. Editing it was much harder. Conversations that I thought were really coherent weren't. I wound up putting in little bloops for when the conversation switched significantly (and I cut out a bunch of stuff in between), but I'm not sure that was the right decision. It was really fun to edit, but it took a lot more time than I expected. Overall, it was a great experience.

Moving on to the meat of the response:

The main obstacles women and minorities face are established groups that are prejudiced against hiring and advancing them. Also, the general society they grow up in may implicitly or explicitly try to teach them that STEM is a men's world, so they are discouraged from being interested. 
One reason this might be so challenging to break is that engineers hire other engineers that look and think like them with rigorous technical interviews that accidentally maximize People Like Us bias. (I've covered this extensively in a previous blog post).

I don't think famous role models are important. I don't think I ever had one. The Mythbusters would be about the closest I ever had to a popular role model. However, my dad was much more important. He works in Computer Science, and he would discuss work at the dinner table once in a while. I never really understood what was going on (I distinctly remember a conversation where I had no idea why a computer would need a "clock cycle"), but he was interested in his work and seemed to like it. Even if I couldn't understand the problems, they sounded intriguing, and so did his process for solving them.
So I don't think popular role models have as much of an effect as people seem to think - at least, not to me. I think what's more important is someone close to you to encourage you to try out the field. Also, now that I think of it, when I was trying to decide between joining Science or Engineering, both my parents pushed towards engineering, since that's what they did. So they must have had a significant impact on everything else in my life that helped guide me to choose STEM.

Sunday, February 19, 2017

Challenger

(Note: writing this as a blog is still weird. I feel like I need to say: "I know the Challenger is a random topic but I have to for a class" despite the fact that I know that only the professor and TAs will ever read this. Oh well. blogs.)

So, the Challenger disaster. I only recently learned that engineers were against the launch before it happened. I knew that the O-rings failed, but I didn't realize that it was so predictable that they would fail.

What were the root causes? Some of the articles made it sound like all the engineers were clamoring for the launch to stop, but management refused to listen to them. I think it was more subtle than that. It sounds like there were a lot of communication problems, and at least two different parties involved (NASA and Morton Thiokol). Also, in hindsight, engineers complaining about the part that caused the disaster seems ominous, but at the time, it was just an O-ring, one of thousands of parts in an incredibly complicated rocket. How do you weigh which part has a significant problem and which one is engineers needlessly fretting? Basically, management weren't incompetent idiots. I'd wager they were weighing many different possibilities, and the O-rings didn't strike them as particularly dangerous.

But that's not to say there weren't problems. That shuttle never should have launched. There needed to be clearer communication. Someone refused to sign off on the launch; that sounds like a huge red flag, but it was ignored, and his boss signed off instead. The engineers had data, but didn't present it convincingly. When they brought up arguments, they were quickly dismissed. I think the managers were allowed to slip into groupthink. They too quickly disregarded the views of their underlings, and were probably too focused on not delaying a heavily watched launch and messing with the schedules of millions of viewers and the first civilian astronaut. I think the root cause was the system; there needed to be an established way for a concerned engineer to attempt to block the launch. If she/he is willing to go through that much trouble, something must be wrong, and the arguments should be heard.

Roger Boisjoly is an interesting case. He didn't share his concerns with the public beforehand, but did in the investigation afterwards, which technically isn't whistleblowing since it's after-the-fact. I still think he was justified, though. The public needed to know about NASA's flawed system, so that NASA would be motivated to fix it. It was more whistleblowing about managers ignoring data than whistleblowing about the accident itself, and ignoring warnings is a serious problem.

However, this additional oversight didn't happen. In 2003, Columbia disintegrated. Why? Maybe the story of Roger Boisjoly didn't become popular enough; everyone only remembered the O-rings. Maybe the company's retaliation worked, and discouraged other engineers from speaking out with their concerns again. I think the retaliation is the worst part of this all. The public (and the government) needed to know that warning signs were ignored, so they would be heeded in the future. Punishing him was counterproductive.

Also, whistleblowing is worth it, even if it destroys your career. Doing so has the chance to save lives or benefit society while damaging the company you work for. That can get you fired and make it difficult to hire you, but keeping quiet is unethical.

Sunday, February 12, 2017

Diversity

I think the general lack of diversity in the computer science industry is an issue that needs to be addressed. What's the best way to address it? Hell if I know. But it's a thing, and I don't think it should be.

Focusing on gender, some articles have addressed the idea that on average males are better suited to the more engineering-y fields for various reasons. I think this carries some weight, but I think it only explains a small part of the gap. (Also, on-average is key here. As Hari Seldon would say, statistics and Psychohistory cannot predict the thoughts or actions of individuals. -Asimov's Foundation reference.)

I think the larger factor here, however, is unintentional discrimination. Some evidence for this: the number of women in computer science has been declining. That suggests non-evolutionary reasons (unless we're devolving really, really fast...somehow).

I think this discrimination explains a large part of the diversity gap, too. We've read before about how the computer science giants "hire only the most perfect-est hire imaginable. Ditch 100 perfect ones for the one that is even better." But that tends to maximize for unintentional bias. You want to hire the guy you really connect with, which basically means you're much more likely to hire someone very similar to your culture. So if you're a white male you wind up hiring other white males because of People Like Us syndrome.

Another factor might be access to computers. Colleges assume you know your way around a computer already. Can you imagine asking basic Windows questions in a fundamentals of computing course? "What's a file? What do you mean by 'double click'? Control-what-delete?" I'm sure there are lots of other skills about using a computer that I just take for granted, since I've had one all my life. Typing takes time to learn. Is moving the mouse intuitive, or does it take a while to master? I honestly don't remember. But those are the basics. There have to be thousands of things you can only learn through practice and familiarity.

Lower income students are less likely to have had time to get familiar with computers, so they're already at a disadvantage in the field. According to the "When Women Stopped Coding" article, girls get less access to computers than boys do, even if they display interest. Also, it seems like companies expect you to have been coding since you were a child. That's really difficult if you're sharing a single computer among a family.

So the culture of the people making hiring decisions, and economic status, have an outsized effect on diversity in the computer science industry. The first because "only hiring the best" accidentally amplifies hidden biases, and the second because access to your own computer could be critical for developing your skills early on.

So what should be done about the situation? I don't know. Blind interviews have been suggested to try to eliminate some forms of bias. I think it goes without saying that Breitbart's idea of a cap was terrible. Harvey Mudd seems to be doing a good job encouraging women to pick up the field by making it more interesting and accessible to them, especially if they aren't as familiar with computers.

This isn't a just state of affairs; change is required to even the playing field.

Sunday, February 5, 2017

Immigration

Alright, a general disclaimer here: I don't know what I'm talking about. I will happily defer questions of immigration to experts in the field.

That being said, I have some extremely controversial thoughts on immigration in general. Mainly, I don't really see why it should be limited at all. Side note: My job isn't directly impacted by immigration (yet), nor has immigration directly impacted someone that I know. So I'm coming from a very privileged standpoint here.

That being said, my thought has always been, if there's someone out there willing to do your job for cheaper, let them. I mean if it's you, it sucks that you lose your job, but if you don't lose it, you're perpetuating an injustice. You're taking that job from someone else who is more desperate for work than you are. Denying work to immigrants in general is giving privilege to Americans for no right other than being born in America.

But I've had this discussion before, and it seems like that's just what nations are supposed to do, make their own population better at the expense of others. And so paying its own workers more for labor than those of foreign countries is just part of being a nation.

Only allowing a certain number of immigrants in is valuing citizens more just for being citizens. Again, I guess this is the point of nation-states, but I feel like it doesn't get explained that way very often, and I kinda disagree with it (While reaping all the benefits from it at the same time). I just can't think of a good reason why someone who wants to come here and be productive shouldn't be allowed to. I mean they "take jobs away" but don't they also buy things? Does increasing the population necessarily decrease jobs for everyone? I would think the increase in population would also increase the number of jobs.

Maybe here, finally, is where we get to the heart of the matter. Maybe most immigrants are low-skilled workers. If that is the case, then allowing unlimited immigration would create a higher proportion of low-skill, low-paying workers, which I think isn't great for an economy. (There are quite a few "if"s and "I think"s in that bit...)

So the H-1B program was created to try to lure only high-skilled workers. High-skilled workers make more money and buy more things, which generates more wealth overall? Although reading that sentence again, it just sounds like trickle-down economics... I'm going to say that, in general, a larger proportion of your population being high-skilled workers is probably a good idea, and acknowledge that I'm way out of my league here.

So what's the point? Basically, America wants to maximize the well-being of its citizens, at the expense of others. (I think this is just what nation-states do.) So it limits immigration to avoid getting too many low-skilled workers. However, it would also like to "brain drain" other nations, stealing their best and brightest; that line of reasoning is why the H-1B program was created. It seems likely that companies have recently been exploiting the H-1B process to hire unskilled labor, however. So I would argue that, although the H-1B process does a good thing overall, the requirements should be made stricter.

Sunday, January 29, 2017

Interviews

So I tried just writing in paragraph format, but I wound up talking all over the place and answering none of the questions in the writing prompt. Therefore, I decided to structure my blog to directly answer the questions asked in order to stay focused. Sure, it's less of a blog-y format, but whatever, I don't like writing blogs anyways.

What has your interview process been like?
So I've actually done only a few interviews. I thought that was kinda weird, until I read Joel Spolsky's "199/200 Applicants Can't Code" article - linked in the "Why Can't Programmers.. Program?" article. He hits a lot of topics, and at one point says this: " I know lots of great people who took a summer internship on a whim and then got permanent offers. They only ever applied for one or two jobs in their lives." That's me. I applied for a summer internship at ViaSat, got a permanent offer, and was happy with it. So I've done very little interviewing.

I've been in 4 interviews, ever. The first was for a minimum wage job at Legoland. (The fact that I even count that shows that I don't do this much). The next was a technical phone interview for the ViaSat internship, then a mostly general interview with Altera, and finally a practice technical interview with professor McMillan (for Software Engineering). So 4 total, but none of those really count. The ViaSat one was just an internship, and only over the phone. Altera, I already had the ViaSat job, so I wasn't hugely invested, and they wanted more of a hardware guy anyways. The Software Engineering one was great, but just practice.

Of those, my favorite by far was the technical interview with McMillan. Maybe just because he explained exactly how I did and what every part of the interview was supposed to be doing. There was the use of a whiteboard, but it was "draw out a diagram of how you would build a program to these requirements," not "invert a binary tree." That was my least favorite part of the interview, but it seemed fair.

What surprises you?
One thing is that I usually feel pretty terrible coming out of an interview. I tend to evaluate myself worse than the person giving the interview does. So I'll come out of something thinking that I failed it, then get an offer. On the practice one, I thought that I did just ok, but the professor says I nailed it, somehow. So I guess the idea that anyone would want to hire me after an interview surprises me.

What frustrates you? 
Deep technical questions about C. No, I don't know what the volatile keyword means; I've never needed to! But I don't really like talking about myself or group projects either, so I don't know what I want. I guess I don't like the whole process. Writing code is ok, but I feel like I'm making some terrible mistake the whole time. And I've only ever been asked to write really simple programs, none of this Google whiteboard "invert a binary tree" nonsense.

What excites you?
Getting a question right? But during the interview, they almost never congratulate you on that. It's always on to the next one. So really the only exciting part is hearing back?

How did you prepare? 
I read my resume and tried to guess what types of questions they'd ask about it. I mostly focused on group work stuff. I think the most I ever prepared was for the phone interview, and even that was only a few hours. (I prepared completely wrong: I focused on group stuff and figured I had the technical stuff down, only to be asked about the "volatile" keyword in C and lots of stuff about C++ that I was really rusty with. Goddamn polymorphism. Why the hell is it called that? The name makes no sense. We're computer scientists, we don't have to use Greek naming, we can call it ThatThingWhereYouCastTheChildPointerToItsParentAndItJustKindaWorks.) Anyways, I probably should have brushed up on polymorphism.
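Since I'm writing it down anyway, here's a tiny Java sketch of what that mouthful means; Shape and Circle are just names I made up for the example:

// A child object sitting behind a parent reference: the overridden method "just kinda works."
class Shape {
    double area() { return 0.0; }
}

class Circle extends Shape {
    double radius;
    Circle(double radius) { this.radius = radius; }
    @Override
    double area() { return Math.PI * radius * radius; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);    // the child is treated as its parent type
        System.out.println(s.area()); // still calls Circle's area(): ~12.566
    }
}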

How did I perform?
Legoland: got the job! yay!
ViaSat Internship: got the internship! Yay!
Altera: Never heard back. Oh well.
Professor Practice: I did really well! Yay!

What do I think of the "general interview process?"
There are a lot of different processes. McMillan's was great: it started with weeder C questions, then data structures, then a memory discussion, then a mildly challenging C function, then writing a program block diagram on the whiteboard. All of which seemed fair, and each of which evaluated a specific part of what I knew. The ViaSat one was, I think, a little too technical? I don't know how I passed. Maybe I was supposed to fail them all, but I managed a few and so that was great? The Altera one was meh. Mostly general stuff, with a really basic function at the end (either Fibonacci or factorial). The Google whiteboards sound terrible; everything I hear says "read the entire Cracking the Coding Interview textbook if you even want a chance," which sounds like a pretty bad way to filter out new hires.
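For the record, "invert a binary tree" is less mystical than it sounds; here's a minimal Java sketch (the TreeNode class is just something I made up for the example). It's the kind of problem that mostly tests whether you've seen the trick before:

// Classic whiteboard question: mirror a binary tree by swapping every node's children.
class TreeNode {
    int value;
    TreeNode left, right;
    TreeNode(int value) { this.value = value; }
}

public class InvertTree {
    static TreeNode invert(TreeNode node) {
        if (node == null) return null;
        TreeNode oldLeft = node.left;
        node.left = invert(node.right);
        node.right = invert(oldLeft);
        return node;
    }

    public static void main(String[] args) {
        TreeNode root = new TreeNode(1);
        root.left = new TreeNode(2);
        root.right = new TreeNode(3);
        invert(root);
        System.out.println(root.left.value); // prints 3: the children swapped places
    }
}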

One thing mentioned in a few articles is the idea of having a candidate work with someone in the company for anywhere from a day to a few weeks. That sounds like a much better way to filter out new hires. I feel like it's basically impossible to judge a person from the snap encounter of an interview, but actually doing work means you can get a sense of what the person will be like. Maybe have a basic "weeder" interview to make sure the person can actually write code, then put them on a team for a day to a week.

Is it efficient?
In the sense of money invested vs. money at risk (for hiring the wrong person), I'd say yes. If FB is paying >100k per year for each employee, it makes sense to invest a lot up front and try to make sure you get the best person for the job. If you hire the wrong person, it could be difficult to lay them off, and that's a waste of 100k yearly.

Is it effective?
Maybe. It seems like Google, FB, etc. really do get the "best" talent--or, more likely, a subset of the best. The problem is that the subset of the "best" that they select probably suffers heavily from the People Like Us problem.

Is it humane?
Yeah, probably. The ones I have been in, definitely. (At least, no more inhumane than finals.) The whiteboard algorithm problems described at Google sound cruel, but I wouldn't say they're inhumane.

Is it ethical?
I would say not. As one article mentioned, selecting only those you see as an absolutely perfect fit maximizes "People Like Us" bias. One possible effect of this may be the gender gap in our industry. And passing up perfectly qualified people is unethical; one article even mentions that an "injustice" may have been done.
But there are also ways to do interviews that are ethical. The industry right now is trending away from them because the market is apparently on fire. It sounds like the best approach is coding with a future coworker for a day or more. However, interviews that ask structured questions can work as well, as long as the interviewers judge the answers as objectively as possible. Also, I think the worst thing to do is to throw the interviewee into the "algorithm lottery."

Sunday, January 22, 2017

Does the computing industry have an obligation to address income inequality?

The idea of an entire industry having an obligation to do some sort of social good is interesting to me. How can you trust it to actually have the public good in mind? I know that if the cable or oil companies unveiled some grand scheme to alleviate poverty, I would be extremely suspicious. So how can we expect the computing industry to be free of these same biases? The entire point of companies is to make money. Sure, software companies market themselves as more altruistic, but are they really? What makes them better than every other industry?

Perhaps their incentives are better aligned. Oil and cable companies aren't really positioned to do much about poverty. They provide their service, and that's it. Computer companies, however, are better poised to experiment. For example, Google makes cool things because Google's revenue is based almost entirely on advertisements, which means the more people use Google stuff, the more money Google makes. That's why it can make free things like Google Maps and self-driving cars.

It's tempting to think something along the lines of "Google so loved mankind that it gave them Google Maps. What else can they do for humanity?" But Maps was created with the goal of generating revenue, not improving society. Following this line of thinking, we really shouldn't expect the industry to address income inequality (or any other societal issue) out of the kindness of their hearts. Industries and corporations don't have hearts, and shouldn't. Their job isn't the welfare of the public, and it shouldn't be. That should be left to nonprofits, individuals, and the government.

Overall, there are two main points here. The first is whether industries should be obligated (or even trusted) to attempt societal or political good. The second is whether the computing industry has sufficient technology to make an impact.

For the first point, as outlined above, I don't think they are obligated (or should be trusted) to impact society on their own. The government, however, can provide motivation.

But what about the second point? Does technology exist that could fix some major issues? I think so. For example, I think that access to education for all is a great step forwards, like Khan Academy, MIT's OpenCourseWare, and even stuff on YouTube like SciShow and CrashCourse.

But I think it could go further. Some software giant could probably create fantastic educational software, if they dedicated the resources to it. But, getting back to the first point, this shouldn't be the industry's job. The government or a non-profit can hire the industry to do that task. But the possibilities should be examined so that the powers that be can decide what software projects are the best investments.

Y Combinator decided to examine one such possibility: Universal Basic Income. They're running an experiment to see if UBI could make sense by giving 100 families free money, and seeing what happens.

I think this is how the industry should approach societal issues. Consider a possible software approach, run some preliminary tests, then pitch the idea to a funding body. That body can then run some more research, and possibly fund the company. This puts the incentives in the right place, coming from some funding group with society's best interests at heart, not from the industry itself.

Thursday, January 19, 2017

Parable of Talents

Some New Testament parables are weird, and I flat out disagree with many of them. I have real problems with the Prodigal Son and the Workers in the Vineyard, for example. But the Parable of the Talents is interesting. It has a lot of ideas still relevant today. For example, to whom much is given, much is expected. The man who makes 5 talents and the man who makes 2 talents are treated exactly equally, despite one making 2.5 times more; this is because he was given 2.5 times more. Also, money is distributed based on ability, not heredity. Another, more uncomfortable message is that the rich get richer (through the use of investment banking, no less!) and the poor get poorer. The rich servants invest what they are given, while the poor one does not, for fear of losing his one piece of money and enraging the Master. So there are lots of interesting undertones still relevant today. The Master also seems to be a jerk in some sense. He says "I reap where I have not sown," which is basically the motto of House Greyjoy in A Song of Ice and Fire (Game of Thrones): "We do not sow." (The Greyjoys are basically a bunch of ravaging pirates who refuse to value anything unless they killed someone over it--"paid the iron price.") So creating this Master who demands much of his servants and reaps benefits he had no part in creating is interesting.

But what does any of this have to do with computer science? Perhaps the master is like managers, doling out resources based on ability and expecting returns. They're not directly sowing the fields, but organizing workers to sow them, and reaping the rewards. Those who are given few resources and fail to increase them are laid off, I suppose. Perhaps we are the masters and the computers are the resources; there's only so much time or computing power we can give each project, and if the project fails to deliver, it is terminated.
Perhaps this is more an example of results-oriented thinking: rewarding people based on what they make, and not how they make it. This might be unethical in Computer Science; there are many ways to write terrible code that gets the job done but is impossible to update and maintain.
This parable even has a sense of Moore's law: that you should inherently be able to double what you are given, and doing less than that is failure. This field and its technology have been exploding over the past few decades, but it can't keep that up forever, right? Quantum physics says there is a limit to how small we can build things. And in the parable, can the master really expect his servants to double his money every time? Surely they will sometimes fail and lose money; otherwise he would soon become the richest man in all history (doubling money adds up really fast; after just ten doublings, one talent becomes more than a thousand). But perhaps we are at a place in history where we have seen a rapid increase in resources (computing power) over decades that cannot continue forever. This is like the master who sees most of his servants doubling his money and so expects them to continue doing that, although realistically they cannot.

Introduction

I'm Jacob Kassman, and I'm studying Computer Science at the University of Notre Dame. I really, really don't like writing stuff publicly, so this whole blogging thing should be...fantastic. So why am I doing this? Yay CSE 40175, Ethics and Stuff. I know no one will see this outside of class, but they technically could, which feels rather strange to me.
Ok, so I dislike the idea of blogging. What else? My interests: LEGOs, coding projects, and video games are probably the main three. I've been building LEGOs for longer than I can remember, playing video games for longer than I should, and messing around with code whenever I'm not doing those.
Why am I studying Computer Science? Great question, writing prompt! It makes logical sense and you can do just about anything with it. Furthermore, it's fun and I can't stop, so might as well major in it.
What do I hope to get out of this class? A blog, apparently. Also, something more than just "hacking is bad, don't steal people's data"; I want to get into lines blurrier than that, and I would like some set of guidelines to navigate those blurred lines.
For example, I imagine Facebook and Google can use some fancy algorithms to figure out a lot more about you than what you explicitly tell them (which is already quite a lot). Probably more than people intend those websites to know. Is that ethical, or a form of unethical hacking? Another issue that's always interested me is that software is shipped with known bugs. These bugs are so minor that they would never cause a real issue, and would take much more time than they're worth to fix. Still, this seems weird to me; selling an explicitly faulty product doesn't seem perfectly ethical.