calculus
Here I detail in exhaustive length ramblings about whatever I'm currently doing.
 
# Moving along and software nonsense

So my paper is divided into (currently) 6 sections:  introduction, problem description, model formulation, solution algorithms, topological effects, and conclusion.

The goal is to tackle one section each month, MOSTLY in order.  Solution algorithms and topological effects can be reversed and may very well be because I only need to get the model up and running in a brute force solvable mode in order to test out topological considerations.  Solution algorithms are meant purely to speed up calculations and compare heuristic (approximate) solutions to exact solutions, including their run time.  Topological effects are experiments wherein I simply change the topology of my wireless array.

 

And the introduction is done (mostly).  As well as a first draft of my model formulation, even.  I’ll be working on the problem description this week, and doing *that* will let me finalize my model formulation and go back and make tweaks to my introduction as needed.

All in all, I’m expecting to be done with the first three sections by the end of March, and I’ve already made tremendous progress toward that after only 1.5 weeks.  The goal is to submit the paper for a first review before school starts in Fall, so basically the beginning of August.  I’ll need that head start since I’ll be starting my second paper the moment August arrives, regardless of progress on my first.  I need to fire out these three dissertation papers as soon as possible so I don’t have to think too hard about them anymore.  Or rather, I need to fire out the first two so I can *really* focus on my third, which I described in some detail in my last entry.

That sort of wireless network development is literally an entirely new field, and we’re now deciding to aim for the top journal in the field, and then we’ll consider writing up a book chapter on this sort of development.

In the meanwhile, there’s not much more to learn about the field.  I’ve exhausted most of my resources, now, and while some things I’m certainly not at all sure about, most of what I’ll be doing from now on is just doing.  Turns out, you can actually go pretty far in this area without really having a clue what’s going on.

Anyway, I’m rambling.  I’m avoiding my Time Series Analysis class because although the material makes sense, the professor’s assignments are bogus:  they involve using a specific piece of software to do incredibly complex things when the only tutorial out there covers the most basic point-and-click functions.  We have to encode some normal equations, but he’s not answering how we can do it with this software.

Most people are assuming they need to just code the normal equations in some other software, either from scratch in a language or working with MATLAB or something similar.  That gives us a new set of data points.  The problem, then, is that there’s no way to format that dataset into a format the software accepts.  Or there is, but he’s not sharing that information with us.  We can’t just make the format identical to the software’s example datasets and then rename the file extension.  There’s no built-in function for taking a CSV or TXT or XLSX or any such file and converting it into the correct format.  And the way the software creates datasets involves some sort of weird, wonky tweaking of the actual structure, so it’s all jumbled and looks almost encrypted.
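For the curious, the “code the normal equations yourself” route is genuinely tiny.  Here’s a minimal sketch in Python/NumPy (my own tool of choice for illustration, not anything the course prescribes), with made-up data:  solve (X'X)b = X'y directly for the least-squares coefficients.

```python
import numpy as np

# Minimal sketch of coding the normal equations yourself: the least-squares
# coefficients solve (X'X) b = X'y.  All data below is invented for illustration.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 predictors
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=50)

beta = np.linalg.solve(X.T @ X, X.T @ y)  # the normal equations, solved directly
print(beta)  # recovers roughly [1.0, 2.0, -0.5]
```

The hard part, of course, isn’t this; it’s getting the result back into the course software’s format, which is exactly what nobody can figure out.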

So we’re at a loss.

The actual analysis we have to do is dirt simple.  It’s a Part C in a Parts A,B,C process, and it’s literally verbatim “Do the same things you did in Parts A and B” which were easy to do.  All the problems literally revolve around creating the dataset, and not one single student has any clue what to do.

I hate it when professors dump software onto students and just let them go at it.  This only happens in our Statistics department, too (because of course it does).  Engineering software such as MathCAD, AutoCAD, SolidWorks, etc.?  Dedicated courses for learning them.  Math software such as Mathematica?  MATLAB?  Dedicated tutorials relevant to the project at hand.  Statistics software used by the business school such as SPSS, Minitab, etc.?  Dedicated courses.

Statistics software from the Statistics department such as SAS, R, this nonsense?  “Haha, here you go.  Have fun!”

I remember taking Data Analysis as an undergraduate, being given a floppy disk (WHICH IS STILL NECESSARY because THAT’S the format our IT department is using for SAS and only for SAS) with SAS on it, given an example output, and being told to recreate the output.  This was a regression output, which isn’t a terribly difficult command in SAS (one whole line), but we had to remove certain “standard” outputs and add in certain extra outputs, AND WE HAD TO GO FIND THE DATASET WHICH WAS NOT SAS-READY AND MAKE IT SAS-READY.

All on our own, of course.  Who even uses SAS, anyway?  It’s not more powerful than SPSS or R, to be sure.  Heck, if we’re just looking at pure coding, R has been kind of the standard for how long now?

 

Ultimately, I sent the professor an E-mail, and we had some exchange of words (politely), wherein I explained my frustrations, and apparently everyone else’s because he suddenly moved the homework back two weeks.  So, yay?

I’ve never struggled so much with a piece of software, and it’s only infuriating for one reason:

If you do ANYTHING you didn’t mean to do, you have to completely close out your project, reopen it, and start over.  You can’t save steps in between, you can’t just undo an action, you can’t even do what you want to do until you do several other steps first.  And God help you if you do nothing but click the graph you want to look at, because the software immediately assumes that that graph is now the actual dataset and all analysis will flow from it and not your original dataset (which was hilarious when I started doing regression on residuals without realizing it and had no clue why my life was in shambles).

Do 15 steps and accidentally click on something you shouldn’t have?  Haha, you better go repeat those 15 steps.

And this is some professional software, too?  Jesus.  So far it hasn’t done anything that SPSS can’t do.

 
# And so my dissertation has formally begun

I had several meetings today, one of which concerned this entry’s title.

The goal?  “Given an expectation of demand, how should one build a wireless network array?”

This breaks down into a lot of decisions.  The first is what sort of communication/carrier sensing we’re going to use.  Fortunately, there are literally only 6 configurations:  additive, protocol, capture, and three variants of interference range, one of which is a purely theoretical abstraction used only to develop a mathematical framework (we call it interference-none, as in zero interference, which obviously doesn’t make physical sense).

The additive is the trickiest because you’re modeling with the known realistic assumption that all of your signal towers (“signal towers” being an arbitrary term for “those devices that allow demand points to establish a network and maintain connection to it”) are causing interference to the network overall, no matter how far apart they are.  We absolutely live in this configuration whenever we’re in a region covered by several mobile phone carriers, for example.

The protocol is the simplest to understand and is basically a “looser” form of the additive model:  if signal towers are far enough away from each other, they won’t interfere.  This is how most ad hoc networks are developed.  You generally don’t set up an isolated network under the assumption that it will be too close to another isolated network and cause interference.  Military bases are usually set up this way.
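To make the additive-versus-protocol contrast concrete, here’s a toy Python sketch with entirely invented tower locations, attenuation function, and radius (nothing from my actual model):  the additive total counts every tower’s contribution at a receiver, while the protocol total drops any tower beyond the interference radius.

```python
# Toy contrast between the additive and protocol interference models.
# All locations, the attenuation exponent, and the radius are made up.

def path_gain(tx, rx, alpha=3.0):
    """Toy distance-based attenuation (the exponent alpha is an assumption)."""
    d2 = (tx[0] - rx[0]) ** 2 + (tx[1] - rx[1]) ** 2
    return 1.0 / max(d2, 1e-9) ** (alpha / 2)

towers = [(0, 0), (5, 0), (40, 0)]   # hypothetical interferer locations
receiver = (1, 0)
radius = 10.0                        # protocol-model interference radius

additive = sum(path_gain(t, receiver) for t in towers)
protocol = sum(path_gain(t, receiver) for t in towers
               if (t[0] - receiver[0]) ** 2 + (t[1] - receiver[1]) ** 2 <= radius ** 2)
print(additive, protocol)  # additive counts the far tower; protocol ignores it
```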

Capture is somehow a very cumbersome mediation between the two.  Signal towers need some sort of “external device,” maybe just some extra techno-voodoo-compooter-chips, that keeps a very detailed track of all attempts at communicating (or connecting to, or disconnecting from) across the network; if certain conditions are met there can be interference, but it is generally safe to ignore networks that are too far away.  We also live in this type of network configuration when you consider the constant flow of data from Network A (your internet), Network B (your phone carrier), and Network C (those crazy hotspots people make with their own phones from their own phone carriers).

In fact, Capture is typically why you can literally be sitting next to your router and have a crappy signal—there is some interference going on (which is mostly written off as “this linksys router is a piece of garbage” or “my laptop wireless card is a piece of garbage”) from other signals that may or may not actually be a part of the network itself.  Large networks generally follow the other two models, but individual devices generally follow this one.

Interference Range is a more theoretical construct and has two flavors (plus the third I already mentioned, the purely theoretical one with no interference at all):  one closely follows the Protocol (everything is based on physical location) and one closely follows the Capture (everything is based on wireless communication).  But the core of the Int-Range model is the same for all three regardless.  Basically, there are limitations on communications, but of the form “you will not have” rather than “you cannot allow”:  it’s an active choice rather than a passive effect.  You install the network and you decide, “okay, only so many devices are allowed a connection, so as to minimize total interference.”  It’s a more theoretical construct because, while it’s certainly feasible, you can’t easily predict how many devices might need a connection.  You also need to maintain that number:  if you build a network to allow, say, 10 devices to connect and only 8 decide to connect, you’re going to be “overblasting” signal and causing interference.

Interference Range is almost exclusively used for very tightly controlled “grids,” like, say, a big room of supercomputers that all need to maintain connection with one another and then use, for example, two external devices (one at either end of the room) to transmit signals to and from the actual room (thus the “network” only needs to account for X + 2 total devices, where X = number of supercomputers).  Because it requires a priori knowledge of the number of devices to work effectively, it is the 2nd easiest network to both build and maintain (and it can be adjusted to account for the installation of new devices or the removal of old ones), where Protocol is easily the easiest.

 

“But with those explanations, we clearly know under what circumstances to build which type of network.”

Well, that’s kind of the mystery, right?  The problem is that while all of these ideas work well on paper, they’re rarely, if ever, cost effective, BOTH in terms of “how much money to build” AND in terms of “how to build the network so the demand is satisfied and doesn’t whine all the time about a crappy connection.”

It turns out that under certain circumstances (and we’ve already seen a few in some preliminary analyses), when you’d naturally build X network, it can be a BUNCH of times better to build Y network instead.

Anyway.  These all have to do strictly with “controlling for signal interference.”  Obviously other “types” of networks exist, but they’re categorized in the context of “what does the network look like” or something else; these six are purely a context of “how to account for signal interference,” which leads to figuring out which devices to purchase (or what parts need to be included when creating a device, though nobody’s doing that because X Company will always just make Y Device Type).

That’s the primary focus of the dissertation.  But other things to consider are, of course, actual location, whether or not signal towers will or should be mobile, antenna attenuation and direction, flow patterns, etc.  (And whether or not the antennae should be able to adjust:  a tower at the perimeter of a “base” that only provides signal to “inside the base” doesn’t need to change the direction of its antenna, but devices that connect to satellites need to be able to adjust themselves.  These adjustments are typically only in terms of centimeters, but that’s the difference between a clear signal and a bad one, since moving one centimeter can really change the angle of a beam and completely miss a satellite once you go out thousands of miles.)
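That one-centimeter claim sounds dramatic, so here’s a quick back-of-the-envelope check with assumed numbers (a 1 m antenna and a geostationary-distance target, both my own illustrative choices):

```python
import math

# If the edge of a 1 m antenna shifts by 1 cm, the boresight tilts ~0.01 rad.
# Project that tilt out to geostationary distance to see how far the beam
# lands from its target.  All numbers are assumed for illustration.
tilt = math.atan2(0.01, 1.0)           # ~0.01 rad, about 0.57 degrees
distance_km = 35_786                   # geostationary altitude
miss_km = distance_km * math.tan(tilt)
print(f"{miss_km:.0f} km off target")  # ~358 km
```

So a centimeter at the dish really does turn into hundreds of kilometers at the satellite.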

The literature has more or less optimized all of those considerations, though, so we don’t need to answer theory-specific questions about them.  We just need their results.

Anyway, this is going to be the biggest paper I do.  I have to do two others, but they won’t be as big and aren’t expected to be.

One is just a stochastic (uncertain, time-dependent, decision-dependent) extension of my previous paper (WHICH BY THE WAY WAS FINALLY ACCEPTED pending minor revisions that are almost done I’M SO GLAD THAT CRAP’S ALMOST DONE what a really boring and minor paper).  Excuse my parentheticals, but the previous paper was a deterministic jamming problem (the attacker knows where the demand points are and where they move between time periods).  Not interesting.  Stochastic problems of these types (where the attacker has no freaking clue where the people and the devices are) are much more interesting, both in scope (bigger, less intuitive results) and in interest (to the research community as a whole).

 

The other is a two-stage game (bi-level program, as I found out the operations research people call it) where a defender places access points in the first stage to maximize signal and the attacker places jammers in the second stage to minimize total signal (thus we are minimizing a maximization problem, or the Minimax problem).  The problem itself is a curious problem but one that is not too difficult; the real goal of this will be to find a way to modify pre-existing algorithms to make them run much faster and get better solutions (because there is no exact solution to this problem, only heuristic, or approximate, ones).
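For flavor, the brute-force version of that two-stage game is easy to sketch.  Below is a toy Python version with invented candidate sites, ranges, and signal values (nothing like the real model’s structure or scale):  since the defender moves first, each access-point placement is scored by its worst case over the attacker’s jammer placements.

```python
from itertools import combinations

# Toy brute-force two-stage game: the defender picks 2 of 5 candidate
# access-point sites; the attacker then picks 1 jammer site that knocks out
# any AP within jamming range.  All sites and numbers are invented.
ap_sites = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 2)]  # candidate AP locations
jam_sites = [(1, 1), (3, 3)]                         # candidate jammer locations
signal = {s: 10.0 for s in ap_sites}                 # signal each AP provides
jam_range2 = 2.0 ** 2                                # squared jamming radius

def surviving_signal(aps, jammer):
    """Total signal left after the jammer silences APs within range."""
    return sum(signal[a] for a in aps
               if (a[0] - jammer[0]) ** 2 + (a[1] - jammer[1]) ** 2 > jam_range2)

# Outer stage (defender maximizes); inner stage (attacker minimizes).
best = max(
    combinations(ap_sites, 2),
    key=lambda aps: min(surviving_signal(aps, j) for j in jam_sites),
)
print(best)
```

Enumeration like this blows up combinatorially, which is exactly why the real goal is faster modified heuristics rather than brute force.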

I’m actually going to be starting on these last two and gradually building up to the first one over the course of the next 2.5 years.  These last two I can probably fire out before the end of 2017 (and that would be fantastic) if I work super diligently (which I won’t).

All in all, I’m expecting good things out of 2017.

 
# Utilitarianism

I follow the Trolley Problem Meme on Facebook.  For those unaware, it’s a “utilitarianism dilemma.”

The idea is this.  A train is barreling down some tracks, bound to kill five people who can’t get off.  You come to a switch, knowing that if you pull it, the train will divert itself to a different set of tracks and the five people will be saved.  The catch is that the diverted train will kill one person stuck on that other track.

The dilemma is this.  By leaving the train alone, you’re not interfering with cause-and-effect, so you are not (morally) responsible for those five deaths:  they would have died whether you declined to pull the switch or never came upon the switch at all.  By pulling the switch, you are directly causing someone to die.

Several people hold the utilitarian approach:  five lives are worth more than one.  However, change the problem:  the train is going to kill five people, you are on top of a bridge overlooking the tracks, and next to you is a very large person whose body is somehow able to stop the train.  Would you push him?  The answer changes.  When physical contact is involved, things are more personal (there was a neat video on YouTube explaining this sort of thing).

I used to be utilitarian.  That was many years ago, when I valued intellectualism and intelligence over “just being a human.”  In fact, I’d have remained one if I hadn’t stumbled onto a quote on a professor’s door:

“When I was young, I sought and valued intelligence.  Now that I’m old, I seek and value kindness.”

Like most OH-SO-WITTY sayings/comics/images posted on teachers’ doors/offices, I ignored it.  But then, years later, I had a class with this guy and saw what kind of a person he was.  I did a double-take on that quote, and I realized something.

People, especially those who are not at all adept with numbers, math, and statistics, have a very “mechanical” viewpoint of the world, as if they are more machine than human.  They see the world in abstraction:  everything can be weighed and measured against everything else.  The needs of the many outweigh the needs of the few (I mean, until politics come in line; then this whole thing is reversed).  They seek some strange “optimal efficiency,” min-maxing, max-mining, maxing, and mining their way through life.  “Eat X Calories and make sure your macronutrient ratios are %/%/% for an optimally healthy life,” says this keto diet I’m playing with.  You should be doing X activity Y hours/minutes per day.

“Follow these steps for a best life,” say the people who probably hated following steps in Pre-Algebra or Algebra I.  They’re algorithmic by nature.

But the biggest fault with utilitarianism, and the biggest fault with most philosophies, is that the beliefs are derived the way a scientist works, observing a system from the outside in.  Physics (especially thermodynamics) behaves very differently when you’re detached from the system (how we learn most of our physics) than when you become part of the system.

To the utilitarian, I ask this question:

“Would you still give up your life for five complete strangers if it were you on that one track and someone else had to make the choice of pulling the switch or not?”  There’s no real point in holding a philosophy if “things change” just because they now directly affect you.

This is also why identity-labeling is such a mess.  The obvious conundrum is politics:  “Well, I consider myself X Side, but then these other people on X Side make X Side look bad...”  Instead of just being YOU with YOUR ideas, there’s this (almost desperate, often desperate) need to belong.  This is true of society and civilization:  we need to belong to it to participate in it, and we evolved in such a way that we can feel pretty bad when we’re shut out from it.

But we never evolved a strong need to define ourselves with a laundry list of identifying nouns and adjectives.  That comes from vanity and narcissism.  This is why I generally don’t concern myself when someone whose “summary” is a bullet-point list of words is mocked by others.

But back to utilitarianism, a utilitarian should fully accept his/her fate as that one person on the tracks with pride:  through their death, others may live.

There’s a more hilarious one I saw on the Facebook, wherein a sadistic serial-killer sort would feel more pleasure from letting those 5 die than those 5 would ever feel if they lived, so the utilitarian should be perfectly accepting of the guy letting those five die with glee.

Because the entire point of utilitarianism is that the sum of all happiness should be maximized.  That’s why 5 people should have 5x the happiness of 1 person, and why letting 1 person die is okay.  But if someone can feel 10x the happiness of one person because they’re particularly eccentric, that’s even better (because the guy is only happiest when the most people die.  If 5 people die, we have 10x + 1x happiness with the one person on the other tracks living.  If 5 people live, we only have 1x + 5x happiness, where the guy’s 10x dropped down to 1x ‘cuz he didn’t maximize deaths.  11 > 6, and so there it is).

Of course, I’m a trying-Humanist after thinking hard on that quote.  A Humanist desires to see the good in others and come to solutions rationally.  I merely try to, since I concede that, being a human and thus an emotional creature bound by social mores, I’m going to get emotional about this or that.  Just so long as I try not to hurt anyone in a direct way.

BUT LABELS, MAN.  LIKE I JUST DID THAT TO MYSELF, MIRITE?

Haha, I love deprecating humor (and it continues); that hasn’t gotten OLD AT ALL, WITH EVERYONE TRYING TO CONSTANTLY BE SNARKY AND ONE-UP EACH OTHER IN oh-so “clever” witticisms.

The 90s were all about XTREME, the 2000s were all about gritty reboots (still waiting on that gritty Captain Planet reboot), and the 2010s are all about deprecation.  What an era to live through.

 
# Time Series Analysis

So, I’m in this Recent Developments in Statistics course, and we’re doing Time Series Analysis for the semester.  They did it two years ago or so with a different professor, and they did a different topic last year.

These sorts of courses, these “Recent Developments,” these Seminars, these Special Topics—they’re very disjointed in not having any real prerequisites.  This is a good thing because people of all backgrounds can get in (mostly those Statistics majors who need another class to satisfy hour requirements, where our Stats department is really lacking).

Unfortunately, we’re an experiment, wherein our professor is trying to turn his notes into a textbook.  The material assumes an extremely mathematical background (which, at our school, our statistics students don’t have), and we also need a pretty thorough knowledge of different distributions.

We just got our first homework assignment on Tuesday.  It has 9 problems.  The last two require the use of MATLAB, and the first of those is a pretty simple, straightforward “just do this, make a graph, create a function” that I’m sure I’ll learn how to do once I get myself willing to install MATLAB.  The other problem will require some particular programming technique that I’m sure we’ll go over next Tuesday.

The other seven problems, however, we’ve officially learned how to do them, and man-oh-man, are they a nightmare.

There were two typos that, without corrections, made two problems impossible, and I must have spent about 5 or 6 hours trying to crunch through them.

The nice thing is that every single problem had the exact same first few steps.  If you knew how to set it up, you could at least get that far in all of them (which, admittedly, was about 30% of the problem).  Time consuming and tedious, but it’s there.  The rest is all just “statistical algebraic” manipulation.  Algebraic manipulation means you use certain rules you learned in algebra, like factoring out a common factor (the reverse distributive property) or cancellation of numerators and denominators because of exponents or the like.  Statistical algebraic manipulation is the same, only you’re using your stats rules (like how expectation is linear, so E[aX+Y] = aE[X] + E[Y] ).  But you’re typically doing these things twice because you’re doing it on a time series, so you’re doing it on Cov[big function for now, same big function but from earlier].
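As a friendlier example of that kind of manipulation than anything on the homework:  take an MA(1) series X_t = Z_t + theta*Z_{t-1}, with white noise Z of variance sigma^2.  Expanding Cov(X_t, X_{t-1}) by bilinearity, every cross-term vanishes except theta*Cov(Z_{t-1}, Z_{t-1}) = theta*sigma^2.  Here’s a quick numerical sanity check (in Python rather than the class’s MATLAB, but it’s the same idea; all numbers are my own):

```python
import numpy as np

# Statistical-algebraic manipulation on an MA(1) series X_t = Z_t + theta*Z_{t-1}:
# by bilinearity of covariance, the lag-1 autocovariance should equal theta*sigma^2.
# Parameters below are invented for illustration.
rng = np.random.default_rng(0)
theta, sigma, n = 0.6, 2.0, 200_000

z = rng.normal(0.0, sigma, n + 1)  # white noise
x = z[1:] + theta * z[:-1]         # MA(1) series

empirical = np.mean((x[1:] - x.mean()) * (x[:-1] - x.mean()))
print(empirical, theta * sigma**2)  # both come out near 2.4
```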

Eventually you’re done doing statistical algebraic manipulation and just using normal algebraic manipulation (advanced algebraic manipulation, like cos(x) = +/-1 if x is a multiple of pi).  And we’re doing this all on very, very demanding functions.

No, sir!  First homework, testing out our rules on a few basic functions like aX+Y and then ramping up our skills on harder problems?  Naw, let’s just dive into the hard ones.

So once again I’m in a class that’s consuming a great deal of time.  I keep thinking I’m done with these, but then I wind up in a class where I go “Oh, that’s neat.  I bet that’d be interesting and useful” and end up regretting it.  I was only supposed to have two more classes that would take up my time, and thanks to this debacle, I’ve extended it to three.

Oh, well.  I have to fulfill my Stats minor somehow.  I have to take 4 classes in Stats for a minor, and this is my 2nd one.  I plan on topping it off with Multivariate Methods and then maybe some dumb class I can ace because Multivariate, Time Series, and Regression Analysis are the only three useful stats courses for me given where I will likely wind up.  Ideally I can get into Introduction to Spatial Statistics (a split-level grad/undergrad course with a lab) because that would also be quite useful for me (particularly the kriging).  If not, I’ll just take Linear Models or Nonparametric Stats because whatever I’ll be done with it.
