July 22, 2013 at 8:35 am (Uncategorized)

All comments on my rant-space have been filtered into the bitbucket until it scrolls off HN, go discuss there :P

Jim writes, commenting under some apparently randomly chosen entry,

Headline ladies and gentlemen – Matlab was designed to be productive for mere mortal engineers, scientists, and mathematicians and has been quite successful for that target audience.  It’s not python, C++, or anything else.  Trashing Matlab because you can’t do stuff that users don’t do anyway is like trashing Excel because it doesn’t do word processing very well.

Sure, the proper use for MATLAB is as the desktop-computer-sized equivalent of a graphing calculator. That’s where it shines, even if other desktop calculators costing zero money shine at least as brightly these days.

The first problem is that the MathWorks actively markets MATLAB as a “general purpose programming language” with all the trappings, something which it is manifestly not. To borrow your analogy, it’s as though Microsoft were putting out promotional material showing all the newsletters and magazines people have laid out using Excel.

The second problem is that if you think of yourself as “not a programmer,” that actually doesn’t stop you from needing to write code. It might, however, stop you from seeing that you need to write code, a condition that afflicts a whole lot of people.

Your pitch here is really… strange. It reminds me of the curious phenomenon where textbooks pitched at college freshmen are titled “X for Scientists and Engineers,” while books with actual meat on their bones are more often titled “Introduction to X.” I mean, here you come out purporting to defend scientists, and the only things you say about them are incredibly belittling! (I wonder if there is some personal insecurity and projection fueling this.) I think you are doing a disservice to scientists and engineers by both belittling their computational needs and inflating the perceived difficulty of anything that might actually meet those needs.

So let’s give an example of what real scientists need to do. In my line of work, we do quantitative tests of sensory function, also known as “psychophysics.” If you’ve been to the ophthalmologist and had a visual field test, the kind where you press a button whenever you see a flash, and the machine adjusts things until you’re not quite sure if you’re seeing the flash or not, you get the idea.

Now, building a psychophysics experiment is much like writing a very simple video game. You’re displaying something on a screen, responding to button presses, keeping score. Granted, compared to most games you care quite a bit more about exactly how monitors and eye trackers are calibrated, how precisely response times are measured, the latency between inputs and updates on the screen, logging everything that happens, and so on. On the other hand, the controls and graphics are a lot simpler. But make no mistake, the first thing anyone does when they have a new idea for an experiment is start writing code. And eventually they start using the profiler, because in this branch of science we care about milliseconds and never, ever dropping a frame, and the MathWorks markets their pile of crap as actually being suitable for real-time data collection. (So don’t bullshit me about a profiler being something no one uses.)
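To make “simple video game” concrete: the heart of such an experiment loop is deadline bookkeeping. Here’s a toy sketch in Python (the structure and numbers are illustrative; real code drives a graphics library, polls hardware, and logs everything):

```python
import time

# Toy sketch of the bookkeeping an experiment loop does:
# draw a frame, then check whether the frame deadline slipped.
FRAME = 1 / 60  # seconds per frame on a 60 Hz display
deadline = time.monotonic() + FRAME
dropped = 0
for _ in range(10):
    # ... draw the stimulus and poll the response button here ...
    now = time.monotonic()
    if now > deadline:
        dropped += 1  # in this line of work, a dropped frame is a data bug
    deadline = max(deadline, now) + FRAME
```

The point is that the loop itself is trivial; the care goes into measuring and logging whether the timing held.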

That’s psychology. A “soft” science, with small data. Now talk to people working at CERN, and see what they do with their machines. (Then recalibrate your scale by talking to a geneticist — it speaks volumes about the culture of the “hard” sciences that it’s the bioinformatics people who are actually way more competent at leveraging computers in their work.)

In grad school I witnessed no fewer than three separate failed projects in one lab pitched at building a new experiment controller and data collection framework — for psychophysical experiments. The common denominator among all of them was that they wanted it to be done in MATLAB. Meanwhile, the thing they all actually wanted already existed, that’s how hard their blinders were on.

I’d have written it off as one lab’s damaged culture, if the lab down the stairs weren’t running into exactly the same problems!

That’s what happens when the mythology of “real programming is hard” takes hold. As though programming languages are things that programmers inflict on themselves in fits of masochism! As it turns out the real masochists are the ones who stick to inappropriate tools “because they’re easy” (i.e. because it’s what they know) and painfully push far beyond the reasonable scope of those tools’ abilities.

Let’s be clear about this. Programming languages are labor saving devices. If they didn’t make large classes of computational work easier rather than harder, they wouldn’t exist. Now some are better than others, depending on purpose, and some should probably not be inflicted on people unless they really know what they’re getting into. Some people encounter the wrong language at the wrong time, one that’s not compatible with their goals, and run away. But the response to perceived difficulty is not to retreat to a shittier, more specialized, more incoherent language, even if it comes with a brochure about its grid-oriented word-layout features.

Steve Yegge was annoyed about a different overmarketed underdelivering language, but this adapts well:

With the right set (and number) of generically-shaped Lego pieces, you can build essentially any scene you want. At the “Downtown Disney” theme park at Disney World in Orlando, there’s a Legoland on the edge of the lagoon, and not only does it feature highly non-pathetic Lego houses and spaceships, there’s a gigantic Sea Serpent in the actual lake, head towering over you, made of something like 80 thousand generic lego blocks. It’s wonderful.

Dumb people buy Lego sets for Camelot or Battlestars or whatever, because the sets have beautiful pictures on the front that scream: “Look what you can build!” These sets are sort of the Ikea equivalent of toys: you buy this rickety toy, and you have to put it together before you can play with it. They’ve substituted glossy fast results for real power, and they let you get away with having no imagination, or at least no innovative discipline.

[Language X] is exactly the same. [Language X]’s success is all predicated on its marketing, nowadays; after all, any idiot can see that there are much better languages out there. Take your pick. But [Language X]’s marketing shows you glossy pictures — look at our arrays! Our [toolboxes]! Our clever [handle graphics] and [parfor] and [reference object classes]! Look how [sciencey] we are! And programmers new to high-level programming ideas (including truly new programmers, and also C/C++ programmers who’ve spent months writing buggy char-buffer manipulation routines) see the marketing and think: Gosh. A pirate ship. That’s soooo cool.

If you’re having problems building your model, you might be tempted to buy some construction system that has a few pieces-shaped-vaguely-like-parts-of-your-model on the brochure, and you take it home and you find out all those pieces don’t actually hold together very well. Alternately, you could find a system that has general-purpose pieces that fit together better without sacrificing flexibility. Such was the goal behind, to take a random example, Python. I’ve watched people pick up MATLAB as a first language and other people pick up Python as a first language, and guess what? Python is easier for the average student (or the belittled “real scientist or engineer”) to actually use. The interesting thing is that those who started out with Python also went on to learn other things and be happier.

Permalink 8 Comments

MATLAB can’t read plain text data out of a wet paper bag.

August 7, 2012 at 1:48 am (crap data structures, matlab doesn't talk to anyone but matlab, powerfully stupid graphics, unexpressive language)

I’m working with someone, and they asked for some of my intermediate data. In the interests of what I thought would be maximum interoperability with whatever data analysis system they preferred, I gave them a .csv file.

It was the obvious thing: one observation per row, with a header row naming columns like “subject,” “direction_content,” “target_spacing,” and “sensitivity.”

Simple enough, right? For example, if you were using R, and you wanted to do a scatterplot of “sensitivity” versus “target_spacing,” symbols coded by subject and colors coded by direction content, you do something like:

library(ggplot2)
data <- read.csv('dataset.csv')
qplot(data=data, target_spacing, sensitivity,
      color=direction_content, pch=subject)

On the other hand, the MATLAB script I got back from this person looked more like this:

[ num, txt, data]=xlsread('datafile.csv', 'A2:Q192');

[num, txt, data]=xlsread('datafile.csv', 'A2:Q192');

sublist= unique(txt(:, 5));
dirlist= unique(num(:, 3));
spacinglist=unique(num(:, 4));
colorlist=hsv(length(dirlist));  % their colormap; any would do
symlist={'o', 's' '*' 'x' 'd' '^' 'v' '>' '<' 'p' 'h' 'v' '>' '<' 'p' 'h'};

for s=1:length(sublist)
    for d=1:length(dirlist)
        for sp=1:length(spacinglist)
            ind=find( num(:, 3)==dirlist(d) & num(:, 4)==spacinglist(sp) & strcmp(txt(:,5),sublist{s}));
            if ~isempty(ind)
                plot(spacinglist(sp), num(ind, 6), [symlist{s}], ...
                 'MarkerSize', 5, 'Color', colorlist(d, : ),'MarkerFaceColor', ...
                 colorlist(d, : ));
                 hold on
            end
        end
    end
end
Well, this is fairly typical for MATLAB code as it is found in the wild. Give people matrices and they use explicit numeric indices for everything. Aside from the difficulties of making marker shape and color covary with dimensions of the data, you have to open up the file in Excel and count along its columns to see what variable they think they’re plotting (and after some head-scratching, it turns out they weren’t actually plotting what they thought).

The factor-of-three reduction in code size on asking R to do the same thing MATLAB does is pretty typical too.

One of the very useful features of R is that you can assign names almost everywhere you would use an index. So, you never have to worry about whether column 4 is “target_spacing” or something else. You just say “target_spacing”.

For example, let’s say you have some nice rectilinear data, like this cross-tabulation of hair color, and eye color, and sex in a group of students:

> data(HairEyeColor)
> HairEyeColor
, , Sex = Male

Hair    Brown Blue Hazel Green
  Black    32   11    10     3
  Brown    53   50    25    15
  Red      10   10     7     7
  Blond     3   30     5     8

, , Sex = Female

Hair    Brown Blue Hazel Green
  Black    36    9     5     2
  Brown    66   34    29    14
  Red      16    7     7     7
  Blond     4   64     5     8

This is just a 3-D array, like Matlab’s 3-D arrays. (Interestingly, Matlab only added multi-D arrays after someone got fed up with the lack of them and went off to write the Numeric package for Python.) And as an aside, NumPy and R have consistent rules for indexing in N dimensions (where N can be 1, 2, 3, or more), while MATLAB forgets about 1-dimensional arrays entirely and, as for consistency, utterly screwed it up.
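For what it’s worth, the consistency in question is easy to see from NumPy itself, where indexing follows one rule at every rank (a minimal sketch):

```python
import numpy as np

# NumPy arrays can be genuinely 1-D, and indexing works the same at every rank.
a1 = np.arange(3)                    # shape (3,)   -- a true 1-D array
a3 = np.arange(24).reshape(2, 3, 4)  # shape (2, 3, 4)
x = a1[1]        # one index for the one dimension
y = a3[1, 2, 3]  # one index per dimension, same rule
```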

Ahem. As I was saying, unlike an array in Matlab, arrays in R can have nice, human-interpretable names attached to their rows, columns, and slices. You can see them in the printout above, or get and set them explicitly with dimnames:

> dimnames(HairEyeColor)
$Hair
[1] "Black" "Brown" "Red"   "Blond"

$Eye
[1] "Brown" "Blue"  "Hazel" "Green"

$Sex
[1] "Male"   "Female"

An array with dimnames lets you access elements by name, not number. So if you want to slice out just the blond, brown-eyed people in this sample, you can just say:

> HairEyeColor['Blond', 'Brown',]
  Male Female 
     3      4

That’s the same as writing HairEyeColor[4, 1,], only you can actually see what it’s trying to accomplish.

Now, I wish that you could go a step further and write HairEyeColor[Eye='Brown',Hair='Blond',], and not worry about which order the dimensions come in, but R’s not perfect. Just useful. Actually, you can do that sort of thing with pandas, a Python library billing itself as “R’s data.frame on steroids.”

Meanwhile, if you pay an additional tithe to the Mathworks, you can get the Statistics toolbox, whose “dataset” class is more or less R’s data.frame with hyponatremia. (No ‘NA’, you see.)

Anyway, if you ever ask me to remember that “Female” is 1 (in this dataset) and “Hazel” is 3, well, look, remembering arbitrary correspondences between names and numbers is something humans are just really bad at and computers are very good at, OK? If you’re writing analysis scripts and you find yourself flipping back to the spreadsheet to count columns… just don’t. Why would you do a job the computer should be doing for you?

Being able to refer to things by name also makes your code more likely to work. For instance, a humanely designed system would be able to look at a statement you’ve messed up, like HairEyeColor[Eye='Brown',Hair='Blue',], and come back with “uh, there’s no such thing as ‘blue hair’ in this dataset.” Which is miles better than coming back with results you didn’t want.
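The same point in miniature, using plain Python dicts and the numbers from the table above (a toy, not real analysis code):

```python
# Keys are names; a wrong name fails loudly instead of silently
# returning the wrong cell.  (Counts taken from the HairEyeColor table.)
hair_eye = {
    ('Blond', 'Brown'): {'Male': 3, 'Female': 4},
    ('Black', 'Brown'): {'Male': 32, 'Female': 36},
}
blond_brown = hair_eye[('Blond', 'Brown')]  # the slice we wanted
caught = False
try:
    hair_eye[('Blue', 'Blond')]  # hair and eye accidentally swapped
except KeyError:
    caught = True  # the mistake is caught here, not in your published figures
```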

Okay. Before I had the first gin and tonic and decided to cover a topic or two on the syllabus of Stuff That’s In Every Useful Programming Language Except MATLAB 101, I had this script someone sent me, which read in some data from a CSV file I’d sent them. They were using numeric indices into the data because they had loaded it as a bare array, using xlsread, which doesn’t do anything useful with column headers. But you ought to be able to load each column of data into a separate field of a struct, use the column headers as struct field names, and refer to them by name that way; that’d be doing pretty well, for MATLAB. So I was planning to tweak this code and send it back with a note about “here’s a nicer way to do it, and let the computer take more of your mental load” (this person teaches a course on MATLAB for scientists, you see, so I wanted to slightly reduce the fucking brain damage that gets propagated out into the academic world).

All you’d have to do is, instead of reading a CSV as a matrix, use the function that reads from a CSV file and uses the column headings to assign the fields in a struct. You know, that function. The one that does the single bleeding obvious thing to do with CSV files. You know, the CSV-reading function. I mean, for all I rant about it, people get work done with MATLAB. It’s just impossible that Matlab can’t read the world’s most ubiquitous tabular data format usefully. Right?

Well, let’s try it.

The first thing I find is csvread. Aside from being deprecated according to the documentation, it only reads numeric data. Now, some of the columns in my file have things like a human observer’s initials, or categorical data that’s better expressed to humans with labels like “left” or “right” rather than trying to remember which one of those corresponds to 0 and which to 1. (R has a built-in “factor” data type to handle categorically enumerable data, while MATLAB has bupkis.) So csvread can’t cut it. Same problem with dlmread.

Next up we have xlsread. That’s what my collaborator used to begin with. Maybe it has an option to get the column names. Well, it won’t even read the file on my MacBook. Nor the Linux cluster we have in our lab. Ah, see, xlsread only reads a CSV file if it can farm its work out via a goddamn COM call to a motherfucking installed copy of Microsoft Excel, and it only knows how to do that on %$)@&#%%..% Windows. And, even if my computers met those conditions, xlsread doesn’t read a file with more than 2^16 rows. Man, I’ve got more than 2^16 rows sitting here just from asking people to look at things and press buttons. Lord help me if I ever have a real dataset.

CSV, you know, one of the world’s most ubiquitous, plain-text, human-readable file formats.

What next? There’s importdata which purports to DWIM the reading of tabular data. And there’s the “Data Import Wizard” which just turns out to be a wrapper for importdata.

Except importdata doesn’t handle the way quotes are used in CSV files. Even if that weren’t a problem, it doesn’t work anyway. It detects that there’s a header row, but it doesn’t actually give me the field names. Why? Some experimentation reveals that it’s, again, completely incapable of handling non-numeric data in columns, even though it purports to put out separate ‘data’ and ‘textdata’ results! Here’s how ‘importdata’ mangles a perfectly straightforward file:

>> type testfile.txt

Height,Width,Depth,Weight,Label,Age,Speed
95.01,76.21,61.54,40.57,Charlie,20.28,1.53
23.11,45.65,79.19,93.55,Echo,19.87,74.68
60.68,1.85,92.18,91.69,Delta,60.38,44.51
48.60,82.14,73.82,41.03,Alpha,27.22,93.18
89.13,44.47,17.63,89.36,Romeo,19.88,46.60

>> [a delim nheaderlines] = importdata('testfile.txt')
a = 
        data: [5x2 double]
    textdata: {6x7 cell}
delim =
,
nheaderlines =
     1
>> a.textdata
ans = 
  Columns 1 through 6
    'Height'    'Width'    'Depth'    'Weight'    'Label'      'Age'
    '95.01'     '76.21'    '61.54'    '40.57'     'Charlie'    ''   
    '23.11'     '45.65'    '79.19'    '93.55'     'Echo'       ''   
    '60.68'     '1.85'     '92.18'    '91.69'     'Delta'      ''   
    '48.60'     '82.14'    '73.82'    '41.03'     'Alpha'      ''   
    '89.13'     '44.47'    '17.63'    '89.36'     'Romeo'      ''   
  Column 7
    'Speed'
    ''
    ''
    ''
    ''
    ''
>> a.data
ans =
   20.2800    1.5300
   19.8700   74.6800
   60.3800   44.5100
   27.2200   93.1800
   19.8800   46.6000

So, it detects the delimiter and the single header row, but it doesn’t give back column names…why? The ‘textdata’ is full of perfectly reasonable numeric strings that haven’t been converted into, y’know, numbers, but some of them have been blanked out. The ‘data’ pulled out a minority of the numeric data but gives you no idea which columns it pulled out for you, and that’s the best that I’ve seen so far.

The File Exchange was not helpful. csv2struct was just a wrapper for xlsread (so it requires Excel on Windows and is limited to 65,535 rows). txt2mat claimed to be ultimately versatile and able to handle mixed-datatype CSV files, but it rejected everything I gave it, unless I threw enough regular expressions at its options that I might as well have written my own CSV parser.

So I ended up writing my own fucking CSV parser. And at several points I got waylaid by things like:

  • textscan will skip over blank fields when they occur at the end of a line (and blank trailing fields are common in real data), and if there isn’t a consistent number of fields per line (not counting the ones it skips), it will cheerfully forget line boundaries for you. So you have to do the conversion line-at-a-time.

  • There’s no good way to convert a cell array of strings to numbers. str2double tries, but it outputs NaN whenever it hits an empty string, or anything else it can’t convert. So there’s no way to tell whether a converted value is NaN because the file said “NaN” versus because the file said “booger police.” See, the thing is, NaN is a value in the IEEE 754 system used to represent undefined results, invalid operations, and the like. The purpose of NaN is to signal problems in your arithmetic. NaN is not a missing-value marker, unless you really want something to obscure the places where your math is going wrong. (This is why R allows explicitly missing values, NA rather than NaN, in any vector, not just floats.)

  • MATLAB’s version of sscanf can’t apply an alternate locale. If you ever interact with people outside the US you will see some CSV type files with conventions like: “,” for the decimal separator, “.” for a thousands place separator, and “;” for the field delimiter. That’s why the whole LOCALE facility in the C standard library exists. R provides the ability to set the locale for the purposes of reading such a file; whereas MATLAB’s documentation explicitly forbids setting the locale, even in a MEX function.

  • Speaking of MEX functions, I might have gotten this done faster if I had gone that route and done the parser in C/flex/bison like a grownup, instead of expecting MATLAB to be any help at all in doing stuff like converting several strings to numbers.
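The NaN point in the list above is easy to demonstrate. Python’s float conversion, for one, draws exactly the distinction that str2double erases:

```python
import math

# str2double-style silent NaN versus an explicit error:
from_file = float('NaN')  # the file really said NaN; fine, it's NaN
try:
    float('booger police')  # garbage should not quietly become NaN...
    converted = True
except ValueError:
    converted = False  # ...and here it doesn't: the bad field fails loudly
```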

So, as you see, reading a CSV file into MATLAB entails a whole lot of bullshit.

By comparison, here’s how you read that exact same file in R.

> x = read.csv("testfile.txt")
> x
  Height Width Depth Weight   Label   Age Speed
1  95.01 76.21 61.54  40.57 Charlie 20.28  1.53
2  23.11 45.65 79.19  93.55    Echo 19.87 74.68
3  60.68  1.85 92.18  91.69   Delta 60.38 44.51
4  48.60 82.14 73.82  41.03   Alpha 27.22 93.18
5  89.13 44.47 17.63  89.36   Romeo 19.88 46.60
> class(x$Width)
[1] "numeric"
> class(x$Label)
[1] "factor"

You see? Ask R to read in a table, and it makes a good guess at the appropriate data types and headers, and you can refer to the components by their actual names. This stuff just isn’t so hard.
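Python’s standard library clears the same bar in a few lines, for what it’s worth. A sketch using csv.DictReader, with the same file contents inlined so it’s self-contained:

```python
import csv
import io

# Same contents as testfile.txt above, inlined for a self-contained example.
data = """Height,Width,Depth,Weight,Label,Age,Speed
95.01,76.21,61.54,40.57,Charlie,20.28,1.53
23.11,45.65,79.19,93.55,Echo,19.87,74.68
"""
rows = list(csv.DictReader(io.StringIO(data)))
label = rows[0]['Label']           # columns by name, not by counting
height = float(rows[0]['Height'])
```

Each row comes back as a dict keyed by the header names, with no column counting.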

Permalink 20 Comments

Here’s another way to crash MATLAB!

May 17, 2012 at 8:00 am (crap data structures, dumb memory management)

Now that we know MATLAB’s memory manager has some problems with actually cleaning up too many objects at once, we can have some fun with it!

function pushdown(N, crash)
    %an N of more than about 1000000 with crash=1 is interesting.
    a = {};
    for i = 1:N
        a = {rand() a};
    end

    if ~exist('crash', 'var') || ~crash
        %feed it to matlab's deallocator one piece at a time, or it will choke.
        for i = 1:N
            a = a{2};
        end
    end
end

First try running:

>> pushdown(1000000)

Then try:

>> pushdown(1000000, 1)

What do you get? I get a hard crash.

So not only can you blow the interpreter stack with deallocation, you can blow the C stack as well!

I mean, a system can get by for some purposes without garbage collection… but not when reference-counting is done this badly.

Permalink 6 Comments

oh wow

May 15, 2012 at 12:54 am (crap data structures, dumb memory management, errors in error handling, thirty misfeature pileup)

You know how errors in destructors are transformed into warnings?

I was doing some comparisons of various strategies for filling out arrays in MATLAB. I looked at the docs and noticed an implementation of a doubly linked list, and thought, why not include that and see how badly it performs?

Just pull the code for dlnode out of your own MATLAB help files:

edit([docroot '/techdoc/matlab_oop/examples/@dlnode/dlnode.m']);

then copy and paste it right into a file named dlnode.m on your matlab path, and then try this:

tail = dlnode(0);
for i = 1:510
	new = dlnode(i);
	insertAfter(new, tail);
	tail = new;
end
clear tail;
clear new;

The output from this overflows the command window buffer so I won’t post the whole thing. Here’s an excerpt:

  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
Warning: The following error was caught while executing 'dlnode' class
Maximum recursion limit of 500 reached. Use set(0,'RecursionLimit',N)
to change the limit.  Be aware that exceeding your available stack space can
crash MATLAB and/or your computer. 
> In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64
  In dlnode>dlnode.delete at 64

….Yeah, not only is the error you get transformed into a warning, but what an error it was! Linked chains of handle objects pile up destructors on the stack and overflow if too large of a group of handle objects goes out of scope at once. And a goddamn stack overflow gets exception-funneled down into a warning!

Your computer has multiple gigabytes of memory. You can not make a linked list more than 500 items long without breaking MATLAB. Oh, wow.

Well, you can increase the recursion limit, but not far before you get hard crashes.

So how badly does the Mathworks’ own example of a linked list perform, anyway? Well, building a linked list N items long turns out to be between O(N^2) and O(N^3). Wowsers. (For the data-structures-impaired among you, it ought to be O(N).) And it’s factors of ten slower than the next slowest way of building up a list of indeterminate length.
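For calibration: in any ordinary language, building a linked list is one constant-time allocation and pointer rewire per item, hence O(N) overall. A minimal Python sketch:

```python
# Each push allocates one node and rewires one pointer: O(1) per item, O(N) total.
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

head = None
for i in range(100000):
    head = Node(i, head)  # push onto the front

# Walk the chain to confirm nothing went missing.
n = 0
node = head
while node is not None:
    n += 1
    node = node.next
```

A hundred thousand nodes, built and torn down without the interpreter batting an eye.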

All of these (the absurd slowness of object property access, the recursive piling up of destructors on the stack, the exception-swallowing) are knock-on effects of the Mathworks’ insane notion that they can do automatic memory management without a garbage collector, while also guaranteeing deterministic destruction and supporting arbitrarily connected reference graphs. A system built under those constraints must do one of three things: accomplish what has never before been accomplished in the history of computer science, blow up in time and space complexity, or fail to work as advertised.

They might as well have assumed P=NP while they were at it.

UPDATE: It gets worse! 500 was the limit in R2010a, but as of R2011b there’s twice as much recursion per finalization, so you can only make a list 250 items long!

Permalink 1 Comment

Errors and crashes and namespace inadequacies, just a typical day.

May 11, 2012 at 5:41 pm (a namespace, errors in error handling, my kingdom for a namespace)

I mostly work on MATLAB R2010a because there is a particular library (which is the only reason I am forced to use MATLAB) that only builds on 32-bit, and R2010a was the last 32-bit release.

I recently encountered a couple of bugs in the MATLAB interpreter and wanted to check if they had been fixed in future versions.

Well, launching R2011b, I find a lot of stuff not working. Hell, I try to “edit” a file and it gives me:

>> edit crashme.m
Error using edit (line 66)
Not enough input arguments.

Now, that’s really weird, because obviously I supplied an argument to “edit.” What the hey? I try editing some other files, it works okay. I try invoking edit like edit(‘crashme.m’) and it still fails. Not enough input arguments?

Usually, as a programmer, there’s a little mental hurdle you have to leap over to even begin to think that maybe the problem is with the system and not with what you’re doing. After all, I’m just sitting here banging at the keyboard, and presumably, if I’ve selected reliable tools, the likelihood that I banged a wrong button ought to be higher than the likelihood that my tools are busted.

Years of experience have shown that when dealing with MATLAB, that little mental hurdle does me no good. So here I go, debugging MATLAB’s own code.

Now all I know is that it failed on line 66 of “edit.m”, which is preceded by this many nested ends (uh, the || operator, have you guys heard of it?):

    57	                                end
    58	                            end
    59	                        end
    60	                    end
    61	                end
    62	            end
    63	        end
    64	    end
    65	catch exception
    66	    throw(exception); % throw so that we don't display stack trace
    67	end

AAAAAAARGH. NO. Do. Not. Do. This.

As I said previously, the entire purpose of exceptions is to propagate out information about the manner of failure, not to disguise the manner of failure behind a lie. I don’t give a shit if it’s an internal function. Propagate out the actual information so I don’t have to break out the debugger on your busted code.

Well, now the only recourse is to reach for the debugger. This is risky: after all, the mere act of bringing up a file in the editor (as the MATLAB GUI does whenever a breakpoint is reached) might very well call “edit” somewhere along the line, so if I set a breakpoint in “edit”, it might go chasing up its own tailpipe.

So I save my work before proceeding.

Setting a breakpoint there on line 66, I find:

K>> edit crashme.m
66      throw(exception); % throw so that we don't display stack trace
K>> getReport(exception)

ans =

Error using sprintf
Not enough input arguments.

Error in message (line 8)
string = sprintf(varargin{:});

Error in edit>openEditor (line 234)
            errMessage = message('MATLAB:Editor:EditorInstantiationFailure');

Error in edit>openWithFileSystem (line 458)

Error in edit (line 51)
                        if ~openWithFileSystem(argName, ~isSimpleFile(argName))



Ah, we see that the original error, which at first claimed nonsensically to be a case of “not enough arguments to edit” was really “not enough arguments to sprintf.” Interesting. And sprintf was called by… Ah! What we have here is yet another goddamned namespace problem. The call to message is actually reaching my function:

K>> which message

Are you getting why you should not funnel exceptions yet? If the people at The Mathworks had simply not written a try/catch clause there, I would have seen the cause of the problem without doing anything like rolling up my sleeves.

It seems my “message.m” is shadowing some other “message.m.” Now, I’m reasonably careful. When I originally chose the function name “message”, I looked at the landscape of MATLAB’s stupid global search path (because when nothing inhabits anything like a namespace, you have to tread carefully) and found that “message” is only used by a couple of toolbox methods that should be safe as long as MATLAB’s method dispatch works how it’s supposed to (ha):

>> which -all message
/Applications/MATLAB_R2010a.app/toolbox/shared/spcuilib/@uiservices/message.m                 % uiservices method
/Applications/MATLAB_R2010a.app/toolbox/shared/filterdesignlib/@FilterDesignDialog/message.m  % FilterDesignDialog method

Of course I should have expected that Mathworks would leave no pronounceable string of letters available to users, and ensure that anyone’s code would mysteriously break if they had thought to name something “message.” In 2011b, I find,

K>> which -all message
/usr/local/matlab11/toolbox/matlab/lang/message.m % Shadowed

Inspecting “message.m” reveals that it’s a newly added function, and that it’s for internal use only. If you had previously written something called “message”… tough luck. No warnings, nothing, just mysterious failures you have to debug.

Mathworks, what on earth are you doing putting more pollution in the global namespace if you supposedly implemented packages back in R2008a? Didn’t you declare back in 2009 that there would be a package you’d be moving your internal shit into?

(It’s not been done because the namespace mechanism barely works, is my best guess.)
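This is exactly the collision that real module namespaces prevent. In Python, for instance, a user-defined name and a library name coexist without silent shadowing (a minimal illustration):

```python
import math

def sqrt(x):
    # A user function that happens to share a name with the library's.
    return 'my sqrt of %r' % x

lib_result = math.sqrt(4.0)  # the library's, reached by qualified name
my_result = sqrt(4.0)        # the user's, unqualified; neither clobbers the other
```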

Anyway, after moving my “message” out of the way…. I can finally open the file in my editor. Beats me why “edit” was choking on giving me an error message in the first place, since it completed successfully. And yeah, the runtime bug I found is still there. Check this out if you want to crash your Matlab:

function crashme()
    crash1();  % you have to actually call it to trigger the crash

    function crash1()
        x = crash2();

        function x = crash2()
            x = evalin('caller', '@() eval(''1'');');
        end
    end
end

Permalink 4 Comments

You’re fixing the wrong thing.

March 7, 2012 at 3:02 am (doing it wrong, lying documentation, trouble with small numbers)

Some folks at Mathworks read this blog. I know because I get referrals from Mathworks internal wikis and bug trackers.

I also know because I’ve seen a few documentation changes. For example, you guys have updated the documentation for “sparse” to reflect that it adds together overlapping indices rather than overwriting them like normal arrays; you updated the docs on “randsample” to reflect that it draws random samples from arrays only if they are at least 2 elements long; you even updated the docs for “getframe” to clarify that you need to turn off the fucking screen saver and walk away from the computer like it’s 1992.

Ahem. You guys are missing the point. Let me repeat from before.

It’s not that MATLAB’s behavior isn’t documented; it’s that the behavior is stupid, and leads to errors when your users (reasonably) assume that behavior would be consistent over array dimensions, or that behavior would be consistent between different but related functions, or that behavior would be consistent within a single function, or that things would just not be busted in general.

I liked the previous, unmodified documentation. It reflected the intentions of your programmers; clearly they were trying to implement the reasonable and useful things they described. It’s too bad you don’t have the wherewithal to finish the job you clearly meant to do and fix the stupid behavior.

Permalink 3 Comments

Cleaning up after yourself: Don’t be a Skinner pigeon.

March 6, 2012 at 12:11 am (errors in error handling)

If you have worked in science, you have almost certainly seen what happens when an experimental rig has a software problem. As an increasing portion of what experimental rigs do moves into software, more and more of lab rigging involves software troubleshooting. To the extent that your lab rigging requires writing original software (which is, approximately, the extent to which your lab rig involves software, multiplied by the extent to which your work involves doing anything original at all), some of the software you will have to troubleshoot will be your own.

This isn’t such a bad position to be in: it’s often much easier to troubleshoot your own mistakes than mistakes made by other people, not least because you have a better idea of what you were trying to do. But there are still things you can do to make it even easier on yourself.

Given that you will write programs, and given that you, like me, are imperfect, your programs and rigs will break sometimes. When they do, it is a great help to have some form of error handling in your programs. Now, the phrase “error handling” for a lot of people conjures up ideas about software that knows how to compensate when the disk crashes, or the network cable comes unplugged, or a transient gamma ray zaps a bit in your memory.

This might be true if you’re working in something like telecom, where your system has to soldier on in the face of machine failure, broken cables, and software crashes affecting other calls. Hardcore!

Erlang: The Movie

These kinds of systems are amazing. Scientific data collection systems are not, and don’t need to be.

I don’t worry about gamma rays or unplugged cables. Such environmental interruptions have causes too numerous and too unpredictable to anticipate, and they account for far too few of the errors to be worth worrying about. So what causes the great majority of errors that might befall my program? I do. By the time I’m ready to use a program in my rig, I’ve generated at least hundreds if not thousands of errors, each pertaining to a previous version of the program that I made a mistake in. Probably a few more happen as I’m beginning data collection.

That puts things in a new light, doesn’t it? You don’t “handle” errors, not really. Error handling is really about shortening the debugging loop. Report the error to the programmer -> programmer attempts a fix -> Restart and try again. Those are the three parts. The shorter this loop, the faster you can get your system working.

And you know what a great benefit a short cycle from trial to feedback is. That’s what most old hands, you know, the academics who gave up on learning anything new right after they switched from Fortran or C to Matlab, that’s the first thing they question when faced with the notion that something else might be better. “Does it have an interactive prompt?” Interactive feedback! A read-eval-print loop! Well, jeez, talk about things everyone has these days — by which I mean, things Lisp had in 1958, and what everyone else had already copied all the way back when Cleve Moler decided to make an interactive shell over FORTRAN numerics libraries. You can’t even define functions at Matlab’s quasi-REPL! This is the depth of ignorance that keeps people using Matlab. But I digress.

The “error handling” needs for scientific data collection (at least in my field) are very straightforward. You don’t want a high-availability system; you want a high-troubleshootability system. If something goes wrong in your experiment, you need to be the first person to know about it. You certainly don’t want your experimental apparatus to continue on blithely doing the wrong thing and collecting the wrong data (if any at all). “Error handling” is a misnomer: you want your system to flatly refuse to continue in the face of errors, and just report, so you can fix and then restart.

So, good news. Compared to the high-availability systems that languages like Erlang were designed to support, it is a complete cakewalk to write software that complains at the slightest insult, folds up whenever there’s a stiff breeze, and turns itself off.
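To make that concrete, here is a minimal fail-fast sketch. It’s Python rather than Matlab (the idea is language-agnostic), and the `read_sample` stub and the 0–10 range are made-up stand-ins for whatever your rig actually measures:

```python
def read_sample():
    # Stand-in for a real acquisition call.
    return 3.2

def collect(n_trials):
    """Fail-fast data collection: validate every assumption and stop
    loudly on the first violation, rather than logging the bad trial
    and soldiering on."""
    data = []
    for trial in range(n_trials):
        sample = read_sample()
        # Refuse to continue on anything out of range: a wrong value
        # recorded silently is worse than a halted experiment.
        if not (0.0 <= sample <= 10.0):
            raise RuntimeError(
                f"trial {trial}: sample {sample!r} out of range; stopping")
        data.append(sample)
    return data

print(collect(5))
```

The point is in the `raise`: the program’s whole contribution to “error handling” is to stop immediately and say exactly which trial and which value offended.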

Which is not to say that I’ve often seen such a system. Especially not one written in Matlab. There is some complexity to the process of shutting down. Claude Shannon’s Ultimate Machine isn’t just a machine turning itself off. If it merely shut itself off, the lid would stay open, the hand would stay there, and it’d be a chore to reset the machine to its original state so that you could reboot it. Actually, most of the complexity in the Ultimate Machine is about maintaining power to the mechanism just long enough to retract the hand into the box, close the lid and be ready to be switched on again. That’s the “restart” part of the report->fix->restart cycle, you see. The less you have to do to reboot, the faster you can make your program work.

Which brings me to pigeons.

You’ve probably sat beside someone on a rig and watched their reaction when something goes wrong in the software. It’s a funny reaction, only made sad because I see it so often.

Shut everything down, power cycle, and reload everything.

Sometimes it reaches such absurd heights that people memorize, and replicate, over hundreds of trials, an exact sequence of powering-on equipment and launching-of-programs, in a stereotypical order, just because it was what got the rig to function one time.

It’s enough to recall B.F. Skinner’s pigeons — the ones who memorized a whole sequence of exaggerated movements, just because they happened to produce that movement sometime close to when they received a reward.

What drives this ridiculous behavior among experimenters is nothing more or less than software that doesn’t clean up after itself.

Maybe a program on machine A leaves a network connection open and so machine B ain’t listening when it tries to connect again. Or it leaves a filehandle open and it can’t re-open the file again until you quit the runtime and restart it. Maybe the embedded software on your hardware spike windower was written by the kind of C programmers who think longjmp is the same as exceptions, so they never released an internal resource.
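Here’s what deterministic cleanup looks like, sketched in Python with a leaked listening socket as the offender. The port number is arbitrary and the trial body is elided; the point is the `finally`:

```python
import socket

def serve_one_trial(port=50507):
    """Open a listening socket, run one (elided) trial, and release the
    socket no matter what happened. A leaked listener is a classic cause
    of reboot rituals: the next launch dies with 'address already in
    use' until you restart everything."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        sock.bind(("127.0.0.1", port))
        sock.listen(1)
        # ... accept a connection and run the trial here ...
    finally:
        # Runs whether the trial finished or blew up: the port is free
        # again, so "restart" means just launching the program again.
        sock.close()
```

Because the release happens on every exit path, running `serve_one_trial()` twice in a row just works; there is no state left behind that a power cycle would have been needed to clear.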

And maybe superstitious boot sequences are justified sometimes. Maybe that’s the only way to deal with components that don’t report what’s wrong and don’t reset themselves to a state where they can start over. The only recourse to the experimenter is to reboot fucking everything.

That ain’t the way you should write your program, though. Don’t be an inhabitant of a Skinner box you built yourself. What a waste of time.

Shorten the reboot cycle.

We’ll explore how your program can clean up after itself, restoring to the ready-to-boot state; and how it’s particularly hard to do when you write it in Matlab.

Permalink Leave a Comment

A bit of schadenfreude

February 16, 2012 at 10:04 pm (linkage)

But it’s also useful. If you want to decide which of two programming languages you’d like to use, why not ask the people who are familiar with both?

Permalink Leave a Comment

Rhetorical question.

July 9, 2011 at 4:10 pm (Uncategorized)

Is anyone providing a useful and innovative service like this for MATLAB?

Surprise me. All the MATLAB users I know are maintaining their own clusters and licensing at great hassle and cost.

Permalink Leave a Comment

Cleaning up after yourself, prologue.

June 22, 2011 at 3:32 pm (errors in error handling, thirty misfeature pileup)

Errors that happen during onCleanup are transformed into warnings? Really?

It doesn’t help that cleanup functions also can’t be closures — they can’t respond to information about a resource that was gathered while the program ran. But first things first.

Hey: if an error happens in my code, that is an error. My program should not continue unless it specifically handles that error. If an error happens while my program is trying to clean up after itself, that means something is wrong and MATLAB should not force my program to blithely continue and wreak further havoc.

I’ve restrained myself from picking too hard on The Mathworks’ decision to promise deterministic destruction for closures and objects, even though it has unacceptable performance penalties. The reason for the restraint is that I can see the argument for object lifecycle management: when your objects correspond to exclusive resources you hold, you do want to have control and guarantees over when they get released.

Well, I just looked into onCleanup and as usual, the Mathworks fucked it up: You cannot write robust programs using onCleanup, because exceptions during cleanup are swallowed.

So I’m going to have to start kicking at the thirty misfeature pileup where MATLAB’s memory management meets its error handling, after all.

There are a number of languages that offer both automatic memory management and exceptions. A few of them are Python, R, Java, and MATLAB. All of these except MATLAB deal with resource cleanup easily with a try/finally statement, which MATLAB lacks, and most also offer some extra sugar in the form of a try-with-resource, which MATLAB tries to do with deterministic destructors, and fails.

One of these things is not like the others:

Python try/finally
“If finally is present, it specifies a ‘cleanup’ handler…. If there is a saved exception, it is re-raised at the end of the finally clause. If the finally clause raises another exception or executes a return or break statement, the saved exception is lost.”
Python with

“That way, if the caller needs to tell whether the __exit__() invocation *failed* (as opposed to successfully cleaning up before propagating the original error), it can do so.”
Java try/finally
“If a finally clause is executed because of abrupt completion of a try block and the finally clause itself completes abruptly, then the reason for the abrupt completion of the try block is discarded and the new reason for abrupt completion is propagated from there.”
Java try-with-resources
“If exceptions are thrown from both the try block and the try-with-resources statement, then the method readFirstLineFromFile throws the exception thrown from the try block; the exception thrown from the try-with-resources block is suppressed. In Java SE 7 and later, you can retrieve suppressed exceptions”
R tryCatch
“The finally expression is then evaluated in the context in which tryCatch was called; that is, the handlers supplied to the current tryCatch call are not active when the finally expression is evaluated.”
MATLAB delete
“A delete method should not generate errors”

One of these things is not like the others, and the one that fucked up is of course MATLAB.
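If you want to see the sane behavior in action, here is a tiny Python sketch: an exception raised during cleanup propagates to the caller instead of being downgraded to a warning, so broken cleanup cannot go unnoticed:

```python
def flaky_cleanup():
    try:
        pass  # the work itself succeeds
    finally:
        # Simulate cleanup going wrong. In Python this exception
        # propagates out of the function; nothing swallows it.
        raise RuntimeError("cleanup failed")

try:
    flaky_cleanup()
except RuntimeError as e:
    print(f"caller saw the cleanup error: {e}")
```

That is exactly the guarantee MATLAB’s onCleanup and delete deny you.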

Permalink 1 Comment


