I got an A in Stochastic Programming and an A in Regression Analysis. I expected both, though. I only got a 12.19/15.188 on my final stochastic programming report for our project, roughly an 80%. This was higher than expected—I only needed a 55% or higher for an A, and I expected about a 70%. The reason: certain items on the rubric we simply couldn’t reach, so we got no real score for them.
To be frank, our code didn’t work. It couldn’t, anyway. We were solving an integer program, and the algorithms we implemented were meant for continuous or mixed-integer programs, as I droned on about in a previous entry. So, when your code isn’t working, using it to answer some “research questions,” get “preliminary results for the research questions,” and “get a valid solution” just isn’t feasible.
But he knew what we were doing and what we were trying to do; this wasn’t just some “do it or fail it” situation. He’s a good professor (which I distinguish from “teacher,” of which he’s a mixed bag; but he’s my mentor, and that’s good enough for me).
With Regression Analysis, I’m pretty sure I had one of those “really high Bs in the 88.9% area,” but because I demonstrated tremendous improvement over the semester, he bumped me up.
How did I improve tremendously?
I stopped going to class. I taught myself 2 chapters before a test, another 2 chapters for another test, and another 2 for our final, where each test was cumulative. So, I taught myself in 3 days what he spent the entire semester on. When I did attend class, he just repeated the book, but only the “surface information” from it (our book is divided into the major explanations and then tiny footnotes at the end of every section that provide theory, caveats, and further, more detailed, more technical explanations of whatever the section is about).
Because of that parenthetical, after our first “practice test” (which I failed), I realized that those footnotes were where the actual problems came from. But reading only the footnotes makes no real sense because of how they’re written, so I did have to read the chapters in their entirety.
So, I’m sitting on 2 As out of 3 classes. The final class is Statistical Methods for Business. I’m expecting a B.
I have the highest average in the class. I consistently made the top grades, demonstrated a very thorough understanding of the material and all the papers that we read, and I know I aced that final.
However, 10% of our grade is attendance, and I already lost that because there was a time I wasn’t really going to class often (we were covering regression. Guess what more detailed class I was taking [but not going to]?). That was my fault, and I accept that. But that means the best I can get is a 90%, and I’m sitting on a 97-98% before the minus 10% from attendance (I’m not perfect, and I deliberately botched a few homework problems because they were more work than I cared to put in for very, very specific reasons that weren’t just “durr im better than this im a lazy genius lol lmao”).
So I’m looking at an 87-88%. I would be furious if I got this sort of average as a result of not doing well in class, but I deliberately made the trade of losing a letter grade in favor of having some free time and days off, things I value infinitely more than a baby stats class (because that’s exactly what this was). So I accept it.
However, if I wind up with an A, I will be ecstatic. Either way, 2 As and a B or 3 As for me.
Good for me. Semester’s over.
Tuesday, I took my Regression Analysis final exam. At our university, we are given 3 hours to take an exam, and we’re allowed to reschedule exams if we have more than 2 in one day (a decade ago, when I was an undergraduate, this was only allowed if one had more than 3 exams in a day, because there are only 4 time slots an exam can occupy: 8-11, 12-3, 3-6, and 7-10. The 7-10 slot only happened for special classes, usually classes in extremely high abundance, like College Algebra, Engineering Mechanics, and Chemistry Labs, and only because our university participated for these classes in some national survey wherein our students were ranked against those of other universities. We’ve actually since added Calculus I and Calculus II to the list).
I had the exam from 12-3. I really like 8-11 exams because it means I can wake up, go take the exam, get it over with, and come home and still have the day to me. However, I will gladly take a 12-3 exam over a 3-6. And I’m very glad I’ll never see another 7-10 exam.
The exam was divided into four parts: True/False, Yes/No (is there some difference here?), Simple Regression, and Multiple Regression.
The last two parts were problem solving with a few “explain this in 1-2 sentences” questions. For the first time out of the three exams (only two counted), he made a very fair test, I thought: he didn’t pull questions from footnotes in our textbook, he didn’t give us a screwed-up matching section (three exams, and all three times he messed them up with some dumb mistake and then blamed us for not “solving the puzzle”), and he didn’t give us those horrendously challenging questions that made us dive into the gritty theoretical underpinnings of methods (we have a separate course where those theories would have been taught properly and then tested on, so there was never a need to put them in this sort of class).
Ultimately, I think I did pretty well. As I turned in my paper, I saw him grading someone else’s exam, so I know I at least got the first half of my Yes/No questions correct. I also know I got “the big problem” correct (every test’s final problem was always a very large task), just because I know what needed to be done.
I have only one more exam remaining, for my Business Methods in Statistics class, and it is Thursday from 8-11 (super excited!). I’m not sure how that one will go. I have an A thus far, and a high A at that, but the professor is set on giving us an exam designed to take the entire 3 hours.
It will be split into two parts: the first part, the majority, will be problem solving similar to our homework, and the second part will be questions revolving around a specific paper (that he already gave us that we’ve had for a week now, so we can read it and familiarize ourselves with it before the exam). According to last year's final (that he gave us), questions will basically consist of “on page X the author said Y, do you agree with this statement? Justify with calculations.”
I’m not entirely worried about any of the exam. I just don’t actually want to spend all 3 hours on it. I’d like to be done within 2, to be sure.
I am truly thankful I have no exam in Stochastic Programming (though I’m not entirely sure what he could make us do for an exam, since it’s been more of a computer programming course).
However, besides these exams, I have two papers I need to finish.
One is for the Business Stats class. I sent it to my professor thinking I was already done, but then he asked if I was submitting it or if I’d like feedback. ProTip: if the person grading your submission offers feedback before you submit, take the feedback.
It’s due Saturday by midnight. I’m pretty happy with what I’ve done: a correlation analysis of various features used in the literature to locate botnets (networks of infected computers made to run an attacker’s code and monitor whatever you do), to see which features were most important in detecting them. Unfortunately, from my data, apparently only 3/10 features were at all useful, and one of them depends on another, so it’s really only 2 useful features. And “useful” is subjective, since their correlations were incredibly low. I also found effect sizes and statistical power, but with such low correlations, I pointed out that these were not really important things to consider anymore.
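For flavor, here’s a minimal sketch of the kind of per-feature correlation ranking I mean, with entirely made-up toy data (the feature names and numbers below are hypothetical, not my actual dataset):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical toy data (NOT my real dataset): each entry is one network
# flow; label 1 = botnet traffic, 0 = benign.
label = [0, 0, 0, 1, 1, 1, 0, 1]
features = {
    "pkts_per_sec":  [1.0, 1.2, 0.9, 3.1, 2.8, 3.5, 1.1, 2.9],  # informative
    "payload_bytes": [40, 60, 55, 50, 45, 58, 62, 41],          # mostly noise
}

# Rank features by the strength of their correlation with the label.
ranked = sorted(((abs(pearson(v, label)), name) for name, v in features.items()),
                reverse=True)
```

With my actual data, most features landed near the bottom of a ranking like this, with correlations too small to mean much.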
I was pretty dismayed by my results, but you can’t argue with results, and my paper wasn’t dependent upon my ability to get good results, only on using good (or correct) methodologies and giving a good analysis of my results, which I did. But I’ll be editing it some, apparently.
My other project is due Friday before midnight for my Stochastic Programming class (in lieu of a final exam): a partner project on jamming a wireless network with an unknown level of demand (people trying to access the network). Our model is great, and it has a nice solution. What isn’t great is our attempt at coding up a specific stochastic algorithm meant to solve these types of programs faster than any attempt at solving them directly. He’s doing one method, I’m doing the other.
My method doesn’t converge to a correct solution. His method just plain doesn’t work. Neither of ours should work in theory, but the beautiful thing about this project is he and I are both discovering ways of making algorithms work for our problems, sort of in a “reinvent/discover someone else’s algorithm” kind of way.
Basically, most algorithms for stochastic programs only work with continuous variables, meaning each variable can take any value in a range: here, 0, 1, or anything in between, since our problem specifically focuses on 0 and 1. That’s a continuous problem.
The next type of problem is a mixed-integer problem, wherein some variables are continuous and others are purely integer (in our case, only 0 or 1). Nearly every algorithm has been tweaked, to great success, to deal with these problems, at a great cost in computational efficiency (time), with heuristics (algorithms that give solutions “approximately close” to the true solutions) developed to buy back efficiency at the cost of accuracy. Nearly every mixed-integer problem, stochastic or not, is solved primarily with heuristics because of how incredibly complex it is to have some variables continuous and some not.
The final type of problem is an integer problem. This means all variables are integers only. This is our problem (all variables must be 1 or 0). My partner has a paper that allows him to deal with this sort of thing (in that, there is a “pure integer” version of the algorithm); mine does not, but I’ve found enough in the literature that says if I’m very careful, I can tweak some things and get a pretty decent solution (but, unless the problem is small, I can never get the true solution, so it’s a heuristic).
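To illustrate why the pure integer case is so much harder, here’s a toy sketch (hypothetical numbers, nothing to do with our actual jamming model) comparing exact 0/1 enumeration against the continuous relaxation for a single-constraint problem. The relaxation is easy to solve and bounds the integer optimum from above, but its answer is fractional, which a 0/1 problem can’t use:

```python
from itertools import product

# Toy single-constraint 0/1 problem (hypothetical numbers, not our model):
#   maximize  c . x   subject to  a . x <= b,   each x_i in {0, 1}
c = [6.0, 10.0, 12.0]  # objective coefficients
a = [1.0, 2.0, 3.0]    # constraint coefficients
b = 4.0                # constraint right-hand side

def solve_integer(c, a, b):
    """Exact 0/1 optimum by brute-force enumeration (2^n points; tiny n only)."""
    best_val, best_x = float("-inf"), None
    for x in product([0, 1], repeat=len(c)):
        if sum(ai * xi for ai, xi in zip(a, x)) <= b:
            val = sum(ci * xi for ci, xi in zip(c, x))
            if val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

def solve_relaxation(c, a, b):
    """Continuous relaxation (0 <= x_i <= 1): for one knapsack-style
    constraint, greedy by value/weight ratio gives the exact LP optimum."""
    x, cap = [0.0] * len(c), b
    for i in sorted(range(len(c)), key=lambda i: c[i] / a[i], reverse=True):
        x[i] = min(1.0, cap / a[i])
        cap -= x[i] * a[i]
        if cap <= 0:
            break
    return sum(ci * xi for ci, xi in zip(c, x)), x

int_val, int_x = solve_integer(c, a, b)     # true 0/1 optimum
rel_val, rel_x = solve_relaxation(c, a, b)  # easier, but fractional
```

Here the relaxation happily sets one variable to 1/3, something no 0/1 solution can do; algorithms built around the continuous case exploit exactly that freedom, which is why they break when every variable must be 0 or 1.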
Fortunately, it’s okay if we don’t succeed, so long as we put the effort in. Our previous update was supposed to require a working methodology, but we still got some 95% of the score even though ours didn’t work, and we lost the 5% not because our code failed but because of general write-up issues we have to address in our final report.
But we’re basically on the path to getting an A, and most of the groundwork here has been laid out such that I have a really good foundation for beginning my dissertation, specifically my first paper.
All in all, things are looking up with the semester coming to an end. Here’s hoping reality conforms to expectations.
All over my feeds, people are saying “Happy day” or “Happy turkey day” in order to avoid certain connotations with (condemnations against) Thanksgiving.
I’m not sure what the logic in this is. If you’re going to celebrate or acknowledge the day, but you believe the day to be awful, you’re still celebrating an awful thing no matter what you call it. “Shit by any other name is still shit.”
It seems if you have a particular moral stance against Thanksgiving, which is a perfectly valid idea to have, you should do the proper thing: abstain.
But if you’re just celebrating it to “go with the status quo,” that’s exactly the argument people (probably you) shout against when it comes to politics, religion, social change, etc. Which in turn makes what you’re doing “compartmentalization”: it’s okay for you because you have a specific reason, even though you’d call it not okay in general, despite the fact that a specific reason could be tied to every specific instance.
And “keeping the peace” is a cop-out. If you’re actively keeping things peaceful, that’s one thing, but there’s a really good chance you’re just passively acknowledging the thought after the fact; being around others and sharing a dinner benefits you in a lot of different ways you might not have nailed down precisely but recognize, on some subconscious level, exist.
And that’s the beauty of humanity. Things are ultimately so complex that we can’t fully reconcile anything if we continuously dig down into any two possibly conflicting ideas, and we can’t reconcile them because we don’t need to. Who cares if you’re going to celebrate the day anyway but still maintain some contemporary social stance for or against something? You can get a lot of Likes on Facebook by proclaiming anything (not to say that this is a primary reason something is done...).
But I never cease to roll my eyes at these people who can’t seem to grasp this for other complex issues. Like politics, religion, social change, morality, ethics, etc.
You’re not a machine, so you don’t have to interpret X as binary if you don’t want to. And you’re well within your rights to change your mind on a whim. You may have to face some consequences for doing so if there are any, such as people’s judgments against you, but there is nothing ultimately preventing you from doing whatever you wish on a whim.
So long as you are not directly (this is the keyword) hurting someone else, of course.
Humanity’s complexity is surprisingly simple. It’s through our attempts to simplify it that it becomes more complex.
When you stop using the paradigm of “good vs bad,” literally everything out there stops being confusing, contradictory, particular, or anything else that we generally don’t like. If you want a paradigm, try this one:
Those who primarily value the group
Those who primarily value the individual
You don’t need to think too hard about a lot of things anymore, and people’s actions stop seeming confusing, but we should be a bit precise with our terms. You have to be careful, though: you’re probably going to want to assume that some things that value the group are good and some things that value the individual are bad, or vice versa. Don’t think this way. If someone’s goal is to benefit the group, then keep it within the context of “how does this affect the group, and how does this affect the individual?” with regard to advantages and disadvantages (which are not inherently good or bad, just favoring one side or the other).
“Group” refers to an abstract, often vague, sometimes large bunch (though can refer to a singular entity). “The Mexicans” is a group. “The Republicans” is a group. “The environment” is a group.
“Individual” refers to a specific, often named, sometimes small bunch (though can refer to more than one). “Bob and Sue” are individuals. “I” am an individual. “Redwood Forest” is an individual.
“Germany” is an individual, but “The Germans” is a group.
The only complex thing with this paradigm is the initial labeling. Once you have that straightened out, decisions that affect “The Germans” vs the decisions that affect “Germany” become crystal clear with distinct advantages and disadvantages for either one. And if you think about it long enough, you’ll realize that although we idealize the “good vs bad” paradigm, we actually practice the “value individual” vs “value group” paradigm.
I remember thinking, back when I was a mechanical engineering major a decade ago, that statistics was boring. My high school offered it as a class, but after that paltry bit of it and the probability I got in my algebra classes, I wasn’t interested. I also never took such courses as a mechanical engineering major.
Toward the end of my undergraduate life, I had switched to the math program and decided I needed something a bit more practical to help with the job search, so I opted into this “actuarial concentration,” which demanded I take some economics, probability, and statistics courses. So I took a generic undergraduate introductory macroeconomics course (which was terrible: the professor used all this vague double-speak because we were heading into the big recession and she didn’t want the class erupting into political pseudointelligence, but the cost was that nothing was ever specific, even when it couldn’t be tied to any political party), an introduction to probability course that required some calculus, and the real core of statistics, a course called Data Analysis.
My probability course went swimmingly. It was a math-based course, not a stats-based one, so I did really well and got my A. Data Analysis, though, was exactly what the name suggested: how to analyze data properly within the context of statistics, without making inferences (and avoiding inferential statistics makes statistics very hard to really wrap your mind around). I really should not have taken it before at least some introductory stats class. It would be like taking Differential Equations without having had Calculus: there are just some courses you don’t take without an introduction first, even if it’s “all just math, anyway, so if you can do one you can do the other.”
I eked out a B, but my thoughts on statistics had now changed. I was sort of interested in it, but only in a very negative sense: I thought it was stupid. Statistics seemed to me to be nothing more than calculation and calculation and calculation and calculation, followed by some schmuck’s interpretation of the numbers based on “convenience.” Did the software default to a specific confidence level? Let’s conveniently use only that confidence level. Did it spit out certain results? Let’s run inferences on only those. Let’s just not go through the process of determining effect size or statistical power (to see if our results even mean anything besides simply existing), or of running the correct nonparametric test, simply because our software does or doesn’t default to it.
But if you notice, my problem lies with those who use statistics, not with statistics itself.
And it took me forever to finally realize that. In fact, it took this semester, really. I’ve now had more statistics classes than I care to shake a stick at, and I’ll probably be taking 3 more pure stats courses (as in, in the actual Statistics department) so I can get a stats minor. In my business stats class, there is a woman, a math major from China, who has the same attitude toward it that I once had: that it’s just calculation and calculation and calculation and calculation, and it’s all pretty meaningless.
But it’s really not, and I thank my business professor, of all people, for that. Until this course, it really was all that to me. But our professor is a quantitative analyst who spent the past semester making us read some thirty research papers that basically all say “this is what a bad paper that manipulates statistics to report findings that might not be there looks like, and this is what a good paper that uses statistics properly looks like.”
And that’s been a godsend, really. I decided then that I would finally crack open the theory and attempt to raze my bias by examining the field’s theoretical framework.
Turns out statistics is a largely complete field. I went from being biased against the field to being biased against people who might use it unfairly, and I recognize that’s not the discipline’s fault. The end consequence is that I enjoy the subject more, even if my pure statistics professor for my Regression Analysis class is pretty bad.
We actually just got done examining a paper in which the author performs an incredible amount of experiments, has a large amount of data, and makes very big conclusions that the data and experiments seem to suggest. It has the markings of a truly great paper.
Turns out his statistical power was so low that only two of his nearly two dozen experiments had any real meaning. But he never reported the statistical power, likely because he just didn’t think to. To summarize: his conclusions would only be meaningful in about 3-4% of the cases in which he got his data. Not very good. In fact, not at all worthwhile to publish a paper on.
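For anyone curious what “checking the power” even looks like, here’s a back-of-the-envelope sketch using the standard normal approximation for a two-sided, two-sample test (the effect sizes and sample sizes below are illustrative, not the paper’s):

```python
import math
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test.

    d     -- standardized effect size (Cohen's d)
    n     -- sample size per group
    alpha -- significance level
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)  # critical value, ~1.96 for alpha=0.05
    shift = d * math.sqrt(n / 2)        # noncentrality of the test statistic
    # Probability of landing in either rejection region under the alternative.
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)
```

The textbook sanity check: a medium effect (d = 0.5) with 64 subjects per group gives roughly 80% power, while a small effect with a small sample gives power near 10%, which is exactly the “your experiments barely mean anything” situation above.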
In fact, most of the papers we’ve looked at are pretty bad, and that’s been my professor’s point: he wants to explain (and has explained all semester) how we, as future/current researchers, should set up our experiments and model our research so we don’t wind up contradicted 10 years down the road by newer research and made a laughingstock in an elementary statistics course (because even though it’s a PhD class, that’s exactly what this is).