Saturday, November 26, 2011

gcc does binary search in switch (not nested if blocks)

So I started looking at Write Great Code, and one of the examples the author offers is a comparison of a switch(x) statement against a chain of if (x == 1) ... else if (x == 2) ... else blocks, matching the values 1 through 4 plus a default case.

So I threw that together into a dummy source file, asked gcc for an assembly listing, and looked. I was a little surprised to see this:
movl -4(%rbp), %eax
cmpl $2, %eax
je .L4
cmpl $2, %eax
jg .L7
cmpl $1, %eax
je .L3
jmp .L2
.L7:
cmpl $3, %eax
je .L5
cmpl $4, %eax
je .L6
jmp .L2

It looks like it peeks into our variable (argc gets loaded into eax for this example) and starts a binary search over the literal values it expects: it tests against 2 first, then branches either to the lower case (1) or to the upper half (3 and 4). I didn't turn on any extra optimizations, so I assume this is just standard practice. I feel slightly less dumb (at the machine level) feeding ploddingly literal switch statements to gcc now, though I still feel like a chump typing them.

The nested if blocks follow a more familiar pattern... is it 1? Well, is it 2? OK, is it 3? Hmm, what about 4? Well, I give up, let's take this else else else else action. I guess it's nice to know this. It may be a reason only a literal can be used in a switch: the language designers were expecting this type of optimization on the back end.

My take-away from this is that if some options are much more likely than others, nested if/else blocks make sense, with the likely cases tested first. If the possibilities are equally likely, switch makes sense. If the purpose is to guard against bad things (dereferencing a null pointer, say), if/else is a necessity, and it's proper that gcc didn't reorder it away (it would be disastrous if it did!).

Javascript and Flash

It's funny how little of the modern web works when you turn NoScript on. It's refreshing to see all the broken assumptions developers have (especially about the number of sources a script might come from). I imagine that a graceful fallback in the absence of JavaScript (I'm fairly sure weblocks does this, and Google has a broad range of feature levels for their services) would be a usability requirement for any serious site. The number of pages with inescapable Flash intro splash screens, or worse still, entire UIs in Flash, seems to be on the decline. That's good...

Thoughts on Control Software

I used to work for an electronic controls company, and I recently started thinking big-picture about how the software was laid out. When I was knee-deep in the details, I had a mind full of notions about what was going on. Fundamentally, this is my takeaway.

Object Oriented design makes a good deal of sense when you really are modeling the behavior of physical objects. Many of the basic principles were there, like a powered-down Smalltalk system. The design tools were essentially object databases which were compiled into microcode for the appropriate hardware. On the large-scale (full building to campus scale) systems, the compartmentalization was fairly rational. So to design a building, you divide it into floors or other reasonable real-world divisions (maybe an exterior area for all outdoor controls), then divide those into rooms or suites of rooms. Within a room, you add controls and controllable objects (lights in my case, but it could easily have been other items). Some systems provided default behavior for free, which could be customized afterward (a light switch in a room automatically turns lights on and off until you program it otherwise). Some systems did not do this at all, but had a much richer set of allowable programming. For example, the residential system used a separate hardware and software system targeted to third-party integrators, while the commercial system was essentially an internal tool. Building the database essentially consisted of reading the construction diagrams and adding each unit and its function into the database.

This model (an Object Oriented drag-and-drop database tool) had its limitations. For example, a common residential control had a large on/off button, which supported a double-tap as a special command, and a raise/lower rocker off to one side. The raise/lower rocker was likely to be left alone, since there are few sensible overrides, but the double-tap could be adjusted to do just about anything. Just about... It seems one downfall of the OO ideology is that some objects support a different set of methods than others. So while a multibutton keypad could handle fairly complex conditional behavior, this was not allowed in the single-button toggle switch, even from a double-tap. This behavior was not extensible at runtime. Many of the buttons allowed chaining other buttons (sending a keypress event) to allow some impressive logic. Some types of controls did not support this 'dolikepress control key' method. Sadly, the workaround was that such a control did allow sending RS-232 commands from its onboard serial port. I once had to install a few loopback wires to tie com1 to com2 so that a second-class control could implement first-class behavior (using the RS-232 serial command to send a press event to a key which could handle the command I needed). This, of course, was done with cryptic string literals. Try showing someone how that's done and expecting them to be able to replicate it.

One other thing I saw while I was there was the evolution of the granularity of control. The company moved from a circuit-based centralized concept to a distributed item-level design (with a mix of, and some tension between, cabling controls to the nearest connected device or back to a central hub). This introduced a new level of complexity (rather than addressing a unit as 12, attached to wire 3 running back to the hub, it now was indirectly accessed as the control attached to device 20 on link 4, etc.) but freed up the electrical topology and cut cabling costs for installers. I see how on the business end this made a great deal of sense. There also was a general tendency to move away from manual addressing (via dip switches, in software menus, etc.) to serial-number addressing. Unfortunately, the database design ideas still required a set address/location pair for the design, and some time was spent on each installation associating control serial numbers with database IDs. The goal was to make replacement simple, with removal and replacement alone sufficient for the hub to infer the new device's behavior, but this had problems if there was more than one failed item on a cable (since there could be no unambiguous solution, the system did nothing) or multiple replacements at a time. This led to some interesting hangups when power was removed from a portion of a floor (not uncommon in construction). In the end, some manual intervention in the software was required to overcome this hurdle (in the several years since I left, I imagine this has been corrected, since manpower costs money).

On the whole, I found the drag-and-drop available actions easy to explain, but the model failed to abstract away the details, since the desire to maintain programmability and flexibility (yes, we can do that) outweighed the need to provide an intuitive system for end users. Really, the end user only wants the lights to turn on when he hits a button, and the support/maintenance team that inherits a beast of a control system is already used to dealing with 10 different hardware control platforms.

I think the BACnet integrators are making a good deal of headway in alleviating this pain for building managers, though, not coming from a ladder-diagram world, I found programming with Vizio a little queer. But having one software control system to view AC, alarms, security, and lights, and to handle all timed control events, is a big win for the maintenance staff.

Friday, November 18, 2011

Google Nonsense

So I have this (badly garbled) recollection of something I read: that some people would rather spend a lifetime making an X-second process into a Y < X second process, and that these are the real agents of change in the world. My version is totally botched, Google wants to direct me to 20 pages about personal finance, and miscellaneously grouped quote sites don't help. So I obviously misremember some wonderful comment about engineering perfectionists, and if anyone recognizes this and would love to help, it would be much appreciated.

Arlo Guthrie

And friends, somewhere in Washington enshrined in some little folder, is a study in black and white of my fingerprints.
...
And friends, they may think it's a movement.

Why don't more people sing Alice's Restaurant on Thanksgiving? Grr. Sometimes I miss you, Kevin.

What's wrong with jobs postings?

Why not require strong problem-solving skills and merely desire real-time multithreaded C++ experience? I can see requiring strong communication skills but desiring excellent ones, but this list is indicative of what's wrong with job postings. I would think that a person with 'strong problem solving skills' would be able to identify how they were under-qualified in ASP.NET and rectify it quickly, while someone with FIX and ASP experience who was somehow weak in communication and problem solving would fail to correct the remainder of their (doubtless numerous) deficits, since they would perhaps fail to identify them.

Were I hiring, I would move the ability to communicate, work in a team, and solve problems to the top of the requirements list, and figure the rest is trainable for desirable people. I understand business is tricky: there is an army of duds out there trying to find a job, and your job's on the line when these things fall through. But, really, who writes these things? Who applies? Liars and folks who won't work there long. Seriously, pay 10-20% less and get someone who will learn things your way, rather than hoping the 'ready day one' candidate, who shows up knowing you have only money to offer them (rather than seeking to learn from your long heritage of market leadership), will apply in a timely manner and settle for the proper salary.

Qualifications and Education Requirements:

· Real-time, multithreaded C++ software development using data structures and performance optimization techniques

· Knowledge of Financial Information Exchange (FIX) protocol

· System Life Cycle experience, data storage strategies

· Experience Derivative trading system development.

· Strong written/verbal communication skills – particularly interfacing with customers

· Bachelors’ degree in Computer Science or related field required. Masters preferred

One year of experience minimum

Preferred Skills:

Strong problem solving and analytical skills.
Excellent communication skills.
Professional work ethic and a team contributor.

Thursday, November 17, 2011

Playing around with graphviz

In my data structures class we've been talking about binary trees. One of the example programs used a stack and space counting to display the tree on the console in a very Fortran way. I mentioned to my instructor that using dot would be much simpler: just generate the dot file, hand it off to graphviz, and read the SVG/PNG when it's finished.

Initial experiments were unsatisfying, since a node with only one child tends to have a vertical bar straight down, giving no indication of which link (left or right) it is attached to. So I threw some color in, with blue indicating a left/less-than link and red indicating a right/greater-than link. This was simple and easy to see. I couldn't shake the suspicion that there was a way to push the links to the correct sides, and started thinking about making tables within the records with named ports, but this seemed to be more trouble than it was worth, and I didn't understand why the example code worked but my hand-edited version went unrecognized. Then I remembered reading that invisible edges to invisible nodes will make graphviz count more nodes at a level than there really are. Observe the 'balanced' image:



and the unbalanced original:


Honestly, I prefer the density of the original to the strange, warped feeling of the corrected version. In many cases, the links still point straight downward (especially on outer branches pointing in), but some sense of angular proportion can be felt. With the original, most of the nodes at any given depth are bunched together, giving a feel for the completeness of that level, while the corrected/balanced image has (artificially inserted) gaps to create angular distinctions between left and right, which makes scanning across a level more difficult.

In a rather weird twist, I tried inserting dummy nodes only on the left and leaving the right alone to see what might happen. This is from a different data set, so the overall shape is different, but see how wrong this became:

Sunday, November 06, 2011

While I'm at it (complaining)

I really think Marmalade (an Emacs package site) is great. I really wish installed packages weren't shown as installed if they error out. But getting up to speed quickly (starter-kit) is great, and it works everywhere... it makes a more consistent UI possible when you work on multiple machines. It's almost as pretty as quicklisp, but for elisp packages. And again, after an upgrade, I find that ditching as many of the distribution's native Lisp packages as possible is a must. Step 1: install sbcl. Step 2: build a fresh and current sbcl with it. Step 3: uninstall the packaged sbcl and redirect /usr/bin/lisp to /usr/local/bin/sbcl. Then uninstall common-lisp-controller, install slime/swank, install quicklisp, and proceed with the rest of the world.

ELPA looks like a nice second choice if for some reason you aren't nearly as excited about Marmalade as I am. Really, listing failed packages as installed is my only gripe.

Seriously, Ubuntu?

Every six months I foolishly endure another distribution update from the jokers at Canonical. This fall's Oneiric Ocelot had no lack of surprises. For once, I didn't revert to the classic desktop, and have struggled to work through Unity (I think there are people who like it; I doubt they use a trackball). Although I am growing less opposed to the poorly thought-out hovering scrollbar, I still am not able to adjust.

My number one complaint, besides the absence of the nice footprint menu with its categorical grouping of installed applications, is the second-class status of the terminal, perhaps the one program I use the most. Open a terminal (if you didn't figure this out, it's C-M-t), now switch to another application with M-Tab, then try to get your terminal back. It's absolutely invisible, like a second-class citizen. Clip it to the Launcher (dock)? Can't see the window list? Not there... Open 20 terminals and try to get to any of them... Curiously, they had to purposely exclude this, since a dumb old xterm works just fine and is a first-class citizen.

Really, I think the move to netbook-optimized interfaces is going to leave unhappy dinosaurs like me migrating to a sane environment. I started using Window Maker again on my primary laptop (still on 11.04, since I see no reason to ruin two computers), and apart from having to handle pm-hibernate and nm-applet manually, it's rock solid and a positive environment. I can live with compiz not making my windows wobble while I move them.

I guess it's user-friendly to make the terminal a one-shot deal, but that's what M-F2 used to do, rather than conjuring a powerless launcher that looks like a heads-up display for an action game. It used to call up a 'run' dialog. Hail the 'run' dialog.