Sunday, January 29, 2017

Interviews

So I tried just writing in paragraph format, but I wound up talking all over the place and answering none of the questions in the writing prompt. Therefore, I decided to structure my blog to directly answer the questions asked in order to stay focused. Sure, it's less of a blog-y format, but whatever, I don't like writing blogs anyways.

What has your interview process been like?
So I've actually done only a few interviews. I thought that was kinda weird, until I read Joel Spolsky's "199/200 Applicants Can't Code" article, linked in the "Why Can't Programmers.. Program?" article. He hits a lot of topics, and at one point says this: "I know lots of great people who took a summer internship on a whim and then got permanent offers. They only ever applied for one or two jobs in their lives." That's me. I applied for a summer internship at ViaSat, got a permanent offer, and was happy with it. So I've done very little interviewing.

I've been in 4 interviews, ever. The first was for a minimum wage job at Legoland. (The fact that I even count that shows that I don't do this much.) The next was a technical phone interview for the ViaSat internship, then a mostly general interview with Altera, and finally a practice technical interview with Professor McMillan (for Software Engineering). So 4 total, but none of those really count. The ViaSat one was just an internship, and only over the phone. With Altera, I already had the ViaSat job, so I wasn't hugely invested, and they wanted more of a hardware guy anyways. The Software Engineering one was great, but just practice.

Of those, my favorite by far was the technical interview with McMillan, maybe just because he explained exactly how I did and what every part of the interview was supposed to be measuring. There was a whiteboard involved, but it was "draw out a diagram of how you would build a program to these requirements," not "invert a binary tree." That was my least favorite part of the interview, but it seemed fair.

What surprises you?
One thing is that I usually feel pretty terrible coming out of an interview. I tend to evaluate myself worse than the person giving the interview does, so I'll come out of something thinking I failed it, then get an offer. On the practice one, I thought I did just okay, but the professor said I nailed it, somehow. So I guess the idea that anyone would want to hire me after an interview surprises me.

What frustrates you? 
Deep technical questions about C. No, I don't know what the volatile keyword means; I never needed to! But I don't really like talking about myself or group projects either, so I don't know what I want. I guess I don't like the whole process. Writing code is okay, but I feel like I'm making some terrible mistake the whole time. And I've only ever been asked to write really simple programs, none of this Google whiteboard "invert a binary tree" nonsense.

What excites you?
Getting a question right? But during the interview, they almost never congratulate you on that. It's always on to the next one. So really the only exciting part is hearing back?

How did you prepare? 
I read my resume and tried to guess what types of questions they'd ask about it. I mostly focused on group work stuff. I think the most I ever prepared was for the phone interview, and even that was only a few hours. (I prepared completely wrong: I focused on group stuff and figured I had the technical stuff down, only to be asked about the "volatile" keyword in C and lots of stuff about C++ that I was really rusty with. Goddamn polymorphism. Why the hell is it called that? The name makes no sense. We're computer scientists, we don't have to use Greek naming, we can call it ThatThingWhereYouCastTheChildPointerToItsParentAndItJustKindaWorks.) Anyways, I probably should have brushed up on polymorphism.
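For future me, here's a minimal sketch of the two things that tripped me up. The names (Parent, Child, sensor_reading) are made up for illustration; this is just me scribbling C++ to remind myself, not anything I was actually asked to write in that interview.

    #include <iostream>

    // Toy classes, invented for this sketch.
    struct Parent {
        virtual void speak() const { std::cout << "Parent speaking\n"; }
        virtual ~Parent() = default;
    };

    struct Child : Parent {
        void speak() const override { std::cout << "Child speaking\n"; }
    };

    // volatile: the value may change outside the compiler's view (hardware,
    // another thread), so reads of it can't be optimized away.
    volatile int sensor_reading = 42;

    int main() {
        Child c;
        Parent* p = &c;  // point the parent pointer at the child object...
        p->speak();      // ...and it just kinda works: prints "Child speaking"

        std::cout << sensor_reading << "\n";  // this read actually hits memory
        return 0;
    }

So the Greek name is just describing the fact that one pointer type can take many shapes at runtime. I still think my name is better.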

How did I perform?
Legoland: got the job! Yay!
ViaSat Internship: got the internship! Yay!
Altera: Never heard back. Oh well.
Professor Practice: I did really well! Yay!

What do I think of the "general interview process"?
There are a lot of different processes. McMillan's was great: it started with weeder C questions, then moved to data structures, then a discussion of memory, then a mildly challenging C function, and finally a program block diagram on the whiteboard. All of that seemed fair, and each part evaluated a specific piece of what I knew. The ViaSat one was, I think, a little too technical? I don't know how I passed. Maybe I was supposed to fail them all, but I managed a few and so that was great? The Altera one was meh: mostly general stuff, with a really basic function at the end (either Fibonacci or factorial). The Google whiteboards sound terrible; everything I hear says "read the entire Cracking the Coding Interview textbook if you even want a chance," which sounds like a pretty bad way to filter new hires.
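For the record, here's roughly the gap I'm talking about, sketched in C++. The factorial is about the level of that basic Altera question (I don't remember my exact code, so this is a reconstruction), and the tree inversion is the canonical whiteboard problem everyone keeps citing.

    #include <cstdint>
    #include <iostream>
    #include <utility>

    // About the level of the "really basic function at the end."
    std::uint64_t factorial(unsigned n) {
        std::uint64_t result = 1;
        for (unsigned i = 2; i <= n; ++i)
            result *= i;
        return result;
    }

    // The infamous whiteboard problem: swap every node's children, recursively.
    struct TreeNode {
        int value;
        TreeNode* left;
        TreeNode* right;
    };

    void invert(TreeNode* node) {
        if (node == nullptr) return;
        std::swap(node->left, node->right);
        invert(node->left);
        invert(node->right);
    }

    int main() {
        std::cout << factorial(10) << "\n";  // 3628800

        TreeNode leftChild{1, nullptr, nullptr};
        TreeNode rightChild{2, nullptr, nullptr};
        TreeNode root{0, &leftChild, &rightChild};
        invert(&root);
        std::cout << root.left->value << "\n";  // 2, since the children swapped
        return 0;
    }

Neither one is hard once you've seen it; the complaint is that the whiteboard version mostly rewards having memorized the trick under pressure.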

One thing that a few articles mention is the idea of having a candidate work with someone in the company for anywhere from a day to a few weeks. That sounds like a much better way to filter new hires. I feel like it's basically impossible to judge a person from the snap encounter of an interview, but actually doing work together means you can get a sense of what the person will be like. Maybe have a basic "weeder" interview to make sure the person can actually write code, then put them on a team for a day to a week.

Is it efficient?
In the sense of money invested vs. money at risk (from hiring the wrong person), I'd say yes. If FB is paying over $100k per year for each employee, it makes sense to invest a lot up front to make sure you get the best person for the job. If you hire the wrong person, it could be difficult to lay them off, and that's a waste of $100k a year.

Is it effective?
Maybe. It seems like Google, FB, etc. really do get the "best" talent--or, more likely, a subset of the best. The problem is that the subset of the "best" that they select probably suffers heavily from the People Like Us problem.

Is it humane?
Yeah, probably. The ones I have been in, definitely. (At least, no more inhumane than finals.) The whiteboard algorithm problems described at Google sound cruel, but I wouldn't say they're inhumane.

Is it ethical?
I would say not. As one article mentioned, selecting only those you see as an absolute perfect fit maximizes "People Like Us" bias. One possible effect of this may be the gender gap in our industry. And passing up perfectly qualified people is unethical; one article even mentions that an "injustice" may have been done.
But there are also ways to do interviews that are ethical. The industry right now is tending away from them because the market is apparently on fire. It sounds like the best approach is coding with a future coworker for a day or more. However, interviews that ask structured questions can work as well, as long as the interviewers judge the answers as objectively as possible. Also, I think the worst thing to do is to throw the interviewee into the "algorithm lottery."

Sunday, January 22, 2017

Does the computing industry have an obligation to address income inequality?

The idea of an entire industry having an obligation to do some sort of social good is interesting to me. How can you trust it to actually have the public good in mind? I know that if the cable or oil companies unveiled some grand scheme to alleviate poverty, I would be extremely suspicious. So how can we expect the computing industry to be free of these same biases? The entire point of companies is to make money. Sure, software companies market themselves as more altruistic, but are they really? What makes them better than every other industry?

Perhaps their incentives are better aligned. Oil and cable companies aren't really positioned to do much about poverty; they provide their service, and that's it. Computer companies, however, are better poised to experiment. For example, Google makes cool things because Google's revenue is almost entirely based on advertisements, which means the more people that use Google stuff, the more money Google makes. That's why it can give away things like Google Maps and fund projects like self-driving cars.

It's tempting to think something along the lines of "Google so loved mankind that it gave them Google Maps. What else can they do for humanity?" But Maps was created with the goal of generating revenue, not improving society. Following this line of thinking, we really shouldn't expect the industry to address income inequality (or any other societal issue) out of the kindness of its heart. Industries and corporations don't have hearts, and shouldn't. Their job isn't the welfare of the public, and it shouldn't be. That should be left to nonprofits, individuals, and the government.

Overall, there are two main points here. The first is whether industries should be obligated (or even trusted) to attempt societal or political good. The second is whether the computing industry has sufficient technology to make an impact.

For the first point, as outlined above, I don't think they are obligated (or should be trusted) to impact society on their own. The government, however, can provide motivation.

But what about the second point? Does technology exist that could fix some major issues? I think so. For example, access to education for all is a great step forward, through things like Khan Academy, MIT's OpenCourseWare, and even stuff on YouTube like SciShow and CrashCourse.

But I think it could go further. Some software giant could probably create fantastic educational software, if they dedicated the resources to it. But, getting back to the first point, this shouldn't be the industry's job. The government or a non-profit can hire the industry to do that task. But the possibilities should be examined so that the powers that be can decide what software projects are the best investments.

Y Combinator decided to examine one such possibility: Universal Basic Income. They're running an experiment to see if UBI could make sense by giving 100 families free money and seeing what happens.

I think this is how the industry should approach societal issues. Consider a possible software approach, run some preliminary tests, then pitch the idea to a funding body. That body can then run some more research, and possibly fund the company. This puts the incentives in the right place, coming from some funding group with society's best interests at heart, not from the industry itself.

Thursday, January 19, 2017

Parable of Talents

Some New Testament parables are weird, and I flat out disagree with many of them. I have real problems with the Prodigal Son and the Workers in the Vineyard, for example. But the Parable of the Talents is interesting; it has a lot of ideas still relevant today. For example, to whom much is given, much is expected: the man who makes 5 talents and the man who makes 2 talents are treated exactly equally, despite one making 2.5 times more, because he was given 2.5 times more. Also, money is distributed based on ability, not heredity. Another, more uncomfortable message is that the rich get richer (through the use of investment banking, no less!) and the poor get poorer: the rich invest what they are given, while the poor man does not, for fear of losing his one piece of money and enraging the Master. So there are lots of interesting undertones still relevant today.

The Master also seems to be a jerk in some sense. He says "I reap where I have not sown," which is basically the motto of House Greyjoy in A Song of Ice and Fire (Game of Thrones): "We do not sow." (The Greyjoys are basically a bunch of ravaging pirates who refuse to value anything unless they killed someone over it--"paid the iron price.") So creating this Master who demands much of his servants and reaps benefits he had no part in creating is interesting.

But what does any of this have to do with computer science? Perhaps the Master is like a manager, doling out resources based on ability and expecting returns. Managers aren't directly sowing the fields, but organizing workers to sow them, and reaping the rewards. Those who are given few resources and fail to increase them are laid off, I suppose. Or perhaps we are the masters and the computers are the resources; there's only so much time or computing power we can give each project, and if the project fails to deliver, it is terminated.
Perhaps this is more an example of results-oriented thinking: rewarding people based on what they make, and not how they make it. That might be unethical in computer science; there are many ways to write terrible code that gets the job done but is impossible to update and maintain.
This parable even has a sense of Moore's law: that you should inherently be able to double what you are given, and that doing less than that is failure. This field and its technology have been exploding over the past few decades, but they can't keep that up forever, right? Quantum physics says there is a limit to how small we can build things. And in the parable, can the Master really expect his servants to double his money every time? Surely they will sometimes fail and lose money, or else he would soon become the richest man in all of history (doubling money adds up really fast). Perhaps we are at a similar place in history, where we have seen a rapid increase in resources (computing power) over decades that cannot continue forever. This is like the Master who sees most of his servants doubling his money and so expects them to keep doing it, although realistically they cannot.
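Just to spell out the "adds up really fast" part, here's the back-of-the-envelope arithmetic. The 30 cycles are an arbitrary number I picked, not anything from the parable.

    #include <cstdint>
    #include <iostream>

    // Start with 5 talents and double them every investment cycle.
    int main() {
        std::uint64_t talents = 5;
        for (int cycle = 1; cycle <= 30; ++cycle) {
            talents *= 2;
            if (cycle % 10 == 0)
                std::cout << "after " << cycle << " doublings: "
                          << talents << " talents\n";
        }
        // After 30 doublings: 5 * 2^30, a bit over five billion talents.
        return 0;
    }

Thirty rounds of honest doubling outruns any realistic fortune, which is basically the same reason exponential growth in computing power has to level off eventually.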

Introduction

I'm Jacob Kassman, and I'm studying Computer Science at the University of Notre Dame. I really, really don't like writing stuff publicly, so this whole blogging thing should be...fantastic. So why am I doing this? Yay CSE 40175, Ethics and Stuff. I know no one will see this outside of class, but they technically could, which feels rather strange to me.
Ok, so I dislike the idea of blogging. What else? My interests: LEGOs, coding projects, and video games are probably the main three. I've been building LEGOs for longer than I can remember, playing video games for longer than I should, and messing around with code whenever I'm not doing those.
Why am I studying Computer Science? Great question, writing prompt! It makes logical sense and you can do just about anything with it. Furthermore, it's fun and I can't stop, so might as well major in it.
What do I hope to get out of this class? A blog, apparently. Also, something more than just "hacking is bad, don't steal people's data"; I want to get into the lines blurrier than that, and I would like some set of guidelines for navigating them.
For example, I imagine Facebook and Google can use some fancy algorithms to figure out a lot more about you than what you explicitly tell them (which is already quite a lot), probably more than people intend those websites to know. Is that ethical, or a form of unethical hacking? Another issue that's always interested me is that software is shipped with known bugs. These bugs are so minor that they would never cause a real issue, and fixing them would take much more time than they're worth. Still, this seems weird to me; selling an explicitly faulty product doesn't seem perfectly ethical.