A place to be (re)educated in Newspeak

Saturday, November 17, 2012

Debug Mode is the Only Mode


There has been a fair amount of discussion recently surrounding some of Bret Victor’s talks and blog posts. If you haven’t seen these, I recommend them highly - with a grain of salt.

These pieces make important points related to programming and programming environments, and are beautifully done. 

They also relate to education and other matters which I will not discuss here.

Because of their exquisite presentation, they’ve attracted far more attention than earlier work making similar points ever did. This is a good thing. However, beneath the elegant surface, troubling questions arise.

The demos Victor shows are spoiled by the disappointing realization that we are not seeing a general purpose programming environment that can actually work these miracles for us. Instead, they are hand crafted illustrations of how such a tool might behave. It is a vision of such an environment - but it is not the environment itself. Relatively little is said about how one might go about creating such a thing for the general case - but there are some hints.

We should take these ideas as inspiration and see what one might do in practice. I expect this is one of the things Victor intends to achieve with these presentations. 

Victor recognizes that many of his examples depend on graphical feedback and don’t necessarily apply to other kinds of programming. However, his use of traces and timelines is something we can use in general. In one segment, the state during a loop gets unrolled automatically by the programming environment - morphing time into space so we can visualize the progress (or lack thereof) of the computation.

This specific example might be handled in existing debuggers using a tail-recursive formulation of loops - without tail recursion elimination! Then the ordinary view of the stack in a debugger could be used - though the trace-based view may have advantages in terms of screen real estate, since we need not repeat the code. Those advantages apply to any recursive routine, so adding an unfolded view of a recursive call (or a clique of such calls) is a small, concrete step one might want to investigate.
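
To make this concrete, here is a minimal sketch in Python (purely illustrative; the function names are mine): the same summation written as a loop and as tail recursion. In the recursive form, pausing an ordinary debugger deep in the computation shows every intermediate value of total at once - the loop's history unrolled into space.

    def sum_loop(xs):
        total = 0
        for x in xs:
            total += x
        return total

    # The tail-recursive formulation. Without tail call elimination,
    # each "iteration" leaves a frame on the stack, so the debugger's
    # stack view doubles as a trace of the loop.
    def sum_rec(xs, total=0):
        if not xs:
            return total
        return sum_rec(xs[1:], total + xs[0])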

Traces that show all the relevant data are intrinsically connected to time traveling debugging, because we want more than selective printouts - we want to be able to explore the data at any point in the trace, following the object graph that existed at the traced point wherever it may lead us.

I firmly believe that a time traveling debugger is worth more than a boatload of language features (especially since most such boatloads have negative value anyway).  

When I first saw Bil Lewis’ Omniscient Debugger, I tried to convince my management to invest in this area. Needless to say, I got nowhere.  

The overall view is that a program is a model of some real or imagined world that is dynamic and evolving. We should be able to experiment on that model and observe and interact with any part of it. One should be able to query the model’s entire history, searching for events and situations that occurred in the past - and then travel back to the time they occurred - or to a time prior to the occurrence, so we can preempt the event and change history at will.
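
A crude sketch of that query-and-travel interface, in Python (the names and representation are illustrative assumptions, not an actual implementation):

    import copy

    class History:
        """Record (time, label, state) triples; query them; travel back."""
        def __init__(self):
            self.events = []   # list of (time, label, state snapshot)
            self.clock = 0

        def record(self, label, state):
            # Deep-copy so later mutation cannot rewrite the past.
            self.events.append((self.clock, label, copy.deepcopy(state)))
            self.clock += 1

        def find(self, predicate):
            """Every recorded moment at which the predicate held."""
            return [(t, label) for t, label, s in self.events if predicate(s)]

        def travel_to(self, t):
            """The program state exactly as it was at time t."""
            return self.events[t][2]

For instance, history.find(lambda s: s['balance'] < 0) answers "when did the balance go negative?", and travel_to(t - 1) takes us back to just before it did.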

The query technology enabled by a back-in-time debugger could also help make the graphical demos a reality. You might ask: where in my code did I, however indirectly, make the call that wrote a given pixel? It’s a complex query, but fundamentally similar to asking when a variable acquired a given value.
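
As a toy illustration of the pixel query (a sketch of the idea, not Victor's mechanism; set_pixel and the recording scheme are my own inventions): record the call stack at every pixel write, and "who wrote this pixel?" becomes a lookup.

    import traceback

    pixel_writers = {}   # (x, y) -> list of call stacks that wrote it

    def set_pixel(canvas, x, y, color):
        # Capture where this write came from, however indirect the path.
        stack = traceback.format_stack()[:-1]   # drop set_pixel's own frame
        pixel_writers.setdefault((x, y), []).append(stack)
        canvas[(x, y)] = color

    def who_wrote(x, y):
        """The query: which code paths wrote pixel (x, y)?"""
        return ["".join(stack) for stack in pixel_writers.get((x, y), [])]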

There is a modest amount of work in this area, some of it academic, some commercial (forgive me for not citing it all here), but it hasn’t really taken off. It is challenging, because programs generate enormous amounts of transient data, and recording it all is expensive. This gives a new interpretation to the phrase Big Data. Data is however central to much of what we do, and data about programs should not be the exception. 

A related theme is correlating data with code by associating actual values with program variables. One simple advantage of having values associated with variables is that we can do name completion without recourse to static type information. We get the connection between variables and their values in tools like workspaces, REPLs, object inspectors and (again!) debuggers, but not when viewing program text in ordinary editors or even in class browsers.
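
The point is almost embarrassingly simple to demonstrate in Python (an illustrative sketch; complete is a made-up helper): given an actual value, completion needs no declared types at all.

    def complete(value, prefix):
        """Name completion driven by a live object, not by static types."""
        return sorted(name for name in dir(value) if name.startswith(prefix))

    # complete("hello", "s") -> ['split', 'splitlines', 'startswith', ...]
    # The same one-liner works on any object, because the value is at hand.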

In Newspeak and Smalltalk, developers sometimes build up a program from an initial sketch using the debugger, precisely because while debugging they can see live data and design their code with that concrete information in mind. You’ll find an example of this sort of thing starting around 19:10 in Victor’s talk, where an error is detected as the code is being written based on runtime values.

The Self environment shows one way of achieving this integration of live data with code. Prototype-based languages have a bit of an advantage here because code is tied to actual objects - but once we are dealing with methods that take parameters, we are back to dealing with abstractions, just like class- or function-based languages.

You might even be tempted to say that JavaScript, being a prototype-based language, is a modern incarnation of Self. Please don’t drink and drive; at best this is a cautionary tale on the theme “be careful what you wish for”.

We need a process that makes it easier to go from initial sketches to stable production code. We’d like to start from workspaces and be able to smoothly migrate to classes and unit tests. This is in line with the philosophy expressed in this paper, for example.

It seems to me that the various tools such as editors, class browsers, object inspectors, workspaces, REPLs and debuggers create distinct modes of operation. It would be great if these modes could be eliminated by integrating the tools more tightly. There would always be a live instance of your scope associated with any code you are editing, with the ability to evaluate incrementally as you edit the code (as in a REPL) and to step backwards and forwards as in a time-traveling debugger. The exact form of such a tool remains an unmet UI design challenge.
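
One crude approximation, as a Python sketch (LiveScope is a made-up name; a real tool would need far more, notably the time-travel dimension): keep a persistent namespace per editing session and evaluate each edited fragment against it, so live data is always on hand.

    class LiveScope:
        """A persistent namespace shared between 'editor' and 'REPL'."""
        def __init__(self):
            self.ns = {}

        def evaluate(self, fragment):
            # Expressions report a value; statements mutate the live scope.
            try:
                return eval(fragment, self.ns)
            except SyntaxError:
                exec(fragment, self.ns)

    scope = LiveScope()
    scope.evaluate("x = [1, 2, 3]")
    print(scope.evaluate("sum(x)"))   # 6 - live data on hand while editing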

All of the above holds regardless of whether you are doing object-oriented or functional programming (a false dichotomy, by the way) or logic programming, for that matter.

Tangent: I'm aware that the notion of debugging in lazy functional languages is problematic. But the need for live data and interactive feedback remains. And once an interactive computation has occurred, its timing has been fully determined. So while stepping forward may be meaningless, going back in time isn't.

We should stop thinking of programs as just code. The static view of code, divorced from its dynamic extent, has been massively overemphasized in the PL community. This needs to change, and it will.

28 comments:

Steve Wart said...

Bret's post is inspiring and extremely well-crafted, but daunting. When I look for the smallest kernel of something to build on, the piece that sticks in my mind months after reading this is the sidebar comment "Spreadsheets rule because they show the data".

Most of the PL work I've seen eschews visualization as irrelevant or the responsibility of an external "IDE" (notable exceptions include Smalltalk). But it's astonishing: of all the "killer apps" in the past 30 years of personal computing, the standouts are spreadsheets: VisiCalc created a market for the Apple ][, Lotus 1-2-3 created a market for the IBM PC, and Excel was instrumental in the dominance of Microsoft Windows.

Here we are 30 years later, and most of us are still typing ludicrous cell-oriented formulas (at best), while companies that can afford the luxury delegate the more complex spreadsheet tasks to "end-user computing" groups.

A spreadsheet would be a wonderful hosting platform for most of these ideas, with the desirable feature of being immensely practical. Instead of building text editor upon text editor with syntax highlighting, I wish developers would invest their PLT skills into hosting their language in a spreadsheet. The tabular presentation is such a natural fit for array-based functional languages like APL and J, I'm actually quite confused as to why this approach isn't more common.

Javascript visualizations are eye-catching, but it's an expensive way to express ideas for non-designers. Tables are more conceptually efficient than linear text, they easily support functional and relational ideas, and they're familiar.

This is something I've been thinking about for years. I don't know why I haven't done anything sooner, but it's never too late.

Gilad Bracha said...

Spreadsheets have their place, but they are far from perfect. There are horror stories about companies with critical computations locked inside giant spreadsheets that are impossible to understand and maintain.

Gilad Bracha said...

Lest I be misunderstood, I should add that integrating civilized programming languages and spreadsheets is certainly a worthwhile area to look at.

Craig said...

Hi Gilad-- You say that when you tried to interest your management in back-in-time debugging, you were met with predictable disinterest, and what work has been done has languished from expense. Yet you seem convinced that the tide will turn. What are some visions you have about that? Is it simply that the expense goes down, or (I hope) do you also imagine other pivotal developments?

Gilad Bracha said...

Craig,

Progress in PLs is gated not by technology but by people. PLs (and the tools around them, like IDEs and debuggers) are cultural artifacts, and they change slowly simply because humans have difficulty dealing with more rapid change - often to the point that they simply refuse to change even if it is in their own interest.

So the biggest change is simply generational. What some managers at Sun could not digest in, say, 2004 may become digestible in another place and time.

Of course, technical solutions help, but this area has mainly languished due to people failing to appreciate it.

I do believe that the mainstream is moving forward, ever so slowly.

John Cowan said...

Spreadsheets are unmaintainable and error-prone because the formulae are hidden and can be overridden with fixed data at any time. However, Lotus Improv solved that problem back in 1991 already, and Quantrix Modeler provides the same solution today, with many modern bells and whistles. Unfortunately, though Quantrix is very enjoyable to use, it's proprietary and pricey (though you can get a 30-day free trial). If someone could put together some open-source competition for it (perhaps on top of a LibreOffice engine), it would have the potential to transform the spreadsheet world forever.

Steve Wart said...

Document-oriented storage of data is the problem, not the concept of a spreadsheet.

In late 2006 it was common practice at major investment banks to use Excel to store the risk parameters for credit derivatives, each trading desk having its own format. Needless to say this was out of control, and it still is.

What happened next was perhaps not entirely the blame of the lowly spreadsheet, but it was predictable and it cost millions of people their jobs and livelihoods.

There is really a critical social need for high-quality tools that do not lock companies into proprietary data formats. If management isn't receptive, they can be replaced :)

Gilad Bracha said...

There is nothing wrong with the concept of the spreadsheet, but there is plenty wrong with Excel and its ilk. Over-reliance on these is a recipe for disaster, but it is as much symptom as cause.

I've even read arguments that excessive use of APL contributed to the crash but I'm skeptical.

I am confident human folly will continue regardless of what PLs and tools we come up with. The fact that better tools are not accepted is rooted in that folly - not the other way around.



Heath said...

GDB supports reverse debugging.

Jason Olson said...

I think we can take inspiration from other creative endeavors. In many ways, it's important to think about the process of going from idea (sketching) to completion (finishing). Since I'm a music guy I'll use that as an example.

When creating a new piece of music (let's say for publication), the final format is going to be produced in typesetting-like software (such as Finale or Sibelius). But when you first start working on the idea, you're likely not working in Finale/Sibelius, as that is too final, too inflexible. First you may sit at a piano and simply sketch out some words, chords, and melodies on manuscript paper. Then you may move ideas around as you start to get the shape of the tune together. Then, when the idea has come together, you can put it into final notation and start the finishing process (polishing).

Of course, I'm sure I could use a building/construction analogy as well, but those are over-used :).

The challenge I see with some programming languages is that you are using the same language, with the same rigid constraints, to do your sketching as you will use to put together the final form. In many ways, I think this just makes it harder to sketch. It becomes more difficult to rapidly try out a bunch of different ideas and throw out the ones that don't work.

What I find intriguing in the paper you link to (on Gradual Abstraction) is the idea that, over time, you can introduce more constraints as you come closer to the final form. When you are still sketching, the things that aid you in finalizing the design are hindrances. So why not simply remove those constraints while you are sketching out your ideas?

Sean McDirmid said...
This comment has been removed by the author.
Sean McDirmid said...

I've been working in this area for more than 5 years now; it's nice to see it start getting more attention. To be honest, I was kind of miffed when I first saw Bret's talk, but he brought a lot of new UX/PX ideas that really shone some light on the problem. I agree that the barriers are not really technical, though I'm not convinced that programming model changes won't be needed.

I expect some interesting discussions two weeks from now.

Gilad Bracha said...

Jason:

I have reservations about multiple notations, but I do agree with adding constraints gradually. These two considerations imply a minimum of built-in constraints in the language, and rich tool support to gently and controllably highlight potential errors.

An example is optional typing. More generally, in Dart, we've changed most compilation errors to warnings for this reason - so as to avoid constricting the programmer's workflow.

Gilad Bracha said...

Sean:

Looking forward to it.

Steve Wart said...

Gilad: "I've even read arguments that excessive use of APL contributed to the crash but I'm skeptical." Do you have a reference for that?

I know of one large investment bank that implemented its credit risk management system in Smalltalk, and emerged from the crisis relatively unscathed. Another used Objective-C, and suffered very badly. For interesting historical reasons many of these systems are not built using mainstream software technologies, but I haven't come across APL (I have seen K, but its use was deprecated years before these events unfolded). Of course no programming language can seriously be considered responsible for a large-scale systemic failure, although people are always looking for scapegoats.

Deliberate and accidental subversions of intent play a large role in human folly. I agree it will never be eliminated, but an important goal of language design is to reduce errors and clarify intent.

Gilad Bracha said...

Steve:

Yes I do have a reference:

A Demon of Our Own Design, by Richard Bookstaber. On page 43 there is a section entitled "The APL Cult".

In many ways it is a fine book, though I'm not quite done with it. However, that particular section is a lesson in what happens when even very smart people get out of their depth. The comments on interpretive languages (an inane concept to begin with) are utter nonsense, especially wrt loops. It is clear that the distinction between implementation and language was lost on the author.

That said, APL is a terrible notation for iteration, and it may be that this was a real problem in that context (using the A+ dialect of APL I gather).

http://www.amazon.com/Demon-Our-Own-Design-Innovation/dp/0470393750/ref=sr_1_1?ie=UTF8&qid=1353269649&sr=8-1&keywords=bookstaber+richard

Gilad Bracha said...

Heath: Took a look at the GDB link, thanks. This is progress, though Linux only. The usual issues of speed & size come up, as well as the question of how to query the history effectively.

I know that .Net supports what it calls historical debugging, and there are commercial Eclipse based tools as well. And there are several research papers I'm aware of.

I think that part of the reason we have not seen this become standard practice is that there are still challenges in terms of performance and UI. I hope to study further examples to get a good sense of how well the various systems out there work and what is needed to get this to be standard practice.

Most important, we need to build awareness of the potential, so programmers demand this as a basic right.

stuaxo said...

I'm pretty sure some console emulators implement rewinding by saving state periodically and keeping the changes in between.

Gilad Bracha said...

stuaxo:

Yes, there are several such systems and people keep pointing me at more. The challenge remains to get these to perform well enough during execution and to perform well on interesting queries; the two tend to trade off against each other. Also, many of the replay tools just don't support nice interaction and sophisticated queries. That said, we need more developers aware of this and asking for it, to drive progress and adoption.
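
For what it's worth, the shape of that snapshot-plus-delta scheme fits in a few lines of Python (a sketch with made-up names; real recorders checkpoint far more cleverly):

    import copy

    SNAPSHOT_EVERY = 100   # snapshot frequency vs. replay cost trade-off

    class Recorder:
        """Periodic full snapshots plus per-step deltas, emulator-style."""
        def __init__(self, state):
            self.state = state
            self.snapshots = {0: copy.deepcopy(state)}   # step -> full state
            self.deltas = []                             # one (key, value) per step
            self.step = 0

        def write(self, key, value):
            self.deltas.append((key, value))
            self.state[key] = value
            self.step += 1
            if self.step % SNAPSHOT_EVERY == 0:
                self.snapshots[self.step] = copy.deepcopy(self.state)

        def rewind(self, target):
            """Reconstruct the state at step target from the nearest snapshot."""
            base = max(s for s in self.snapshots if s <= target)
            state = copy.deepcopy(self.snapshots[base])
            for key, value in self.deltas[base:target]:
                state[key] = value
            return state

Tuning SNAPSHOT_EVERY is exactly the trade-off above: frequent snapshots cost time and space during execution, while sparse ones make every rewind replay a longer chain of deltas.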

Unknown said...

A bit late to this thread, but you might be interested in the work we have done on the recording & playback of partial software execution behavior, in the form of a near real-time discrete event simulation engine that recreates an alternative runtime universe for one or more JVMs.

http://www.jinspired.com/products/simz

The following two articles give a good idea of the approach taken and the benefits it has afforded.

http://www.jinspired.com/site/mirroring-mindreading-and-simulation-of-the-java-virtual-machine

http://www.jinspired.com/site/changing-space-and-time-in-software-execution-the-future-is-simulated

One of the challenges that faced previous incarnations of somewhat similar recording & playback goals was performance overhead, which we addressed via an adaptive profiling (filtering) mechanism that automatically determines the essence of the execution. This has allowed us to apply the simulation on a much grander scale in terms of parallelism and distribution. At least that is what I believe our benchmarks and real-world usage demonstrate.

Michael said...

I've been pondering your points and the related topics for a while (obviously, given the article date).

I wonder about the possibility of extending the notion of a time-traveling debugger-as-editor to include the future, by using aspects of type inference and QuickCheck-like property testing. I have a hazy vision of defining a method body, the system suggesting property tests based on the inferred types as I do so, and myself then refining the tests - all in a very tight think, make, explore loop.

Could optional/gradual typing, then, simply be a means of refining the type-inferred property tests?

Gilad Bracha said...

Michael,

That's a really interesting take I had not given any thought to. The relation to types goes both ways - one can use live data to infer types, and use types to generate exemplar data. And one can use tests to generate data. Your suggestion adds to that by letting types help generate tests.

yottzumm said...

http://dsmforum.org/events/DSVL01/carlson.pdf Visual program/macro editor/debugger (reversible) for text documents. Engine in production for 2 or so years @ the WPAFB Procurement office. Developed in 1-2 years by a team of 3: one of whom was one of 2 in his state for his grade level, another who graduated summa cum laude and had failed with other languages, and one attending business school. Had plans for developing a multithreaded object-oriented stack environment (MOOSE), but funding was cut. Key points to take away: build domain-specific desktop objects or miniapps (calculators for string, number, and date). Also: document, form, branch (if-then-else, loops, recursion, "subroutines"), and table. Then use cut/copy/paste into text fields and text areas of cross-domain stuff. Probably not too different from what AppleScript could have been, except that a visual language was included. Didn't quite add multiuser PbE in 2001, 9 years before such a feature was made public to the press. Could have been made much faster by unrolling branches into normal C++ code instead of outputting C++ objects. It was pretty brittle and hard to change, so it didn't change much until the project was over.

yottzumm said...

We would keep several flags available that sped up visual programming, such as not opening and closing levels of scope each time.

Gilad Bracha said...

Certainly interesting work, though a bit off topic for this post. Visual programming works best when there is a good visual metaphor or notation for a domain. GUI builders are a case in point; electrical circuits would be another. I've always been skeptical about general-purpose visual programming. In particular, control flow tends to be a problem. Dataflow might be a bit of an exception to that, as long as it's coarse-grained (which fits with your note on miniapps - building blocks which take care of fine detail).

Dmitry Ponyatov said...

Is any work (in HARC or elsewhere) in progress on using Knowledge Representation & Reasoning data models as the core of interactive systems?

It is very strange that semantic AI is not widely researched as a base technology for software development. I mean things like representing the whole software system as a huge data structure in a homoiconic manner, and manipulating it dynamically without programming languages (one part of the structure manipulating another part). The data model could be almost anything: maybe some sort of attribute grammar; Smalltalk objects work well but are not free of scripting in a language; maybe a frame model, or something else used in KR&R... Not Lisp, definitely; it's too cryptic for humans 8-)

Dmitry Ponyatov said...

As a sample, a most minimalistic webserver represented in a frame model can look like this scheme for the user: https://github.com/ponyatov/itstep/wiki/plot.png
It is an executable data model, representing a tiny software system in the form of an object graph. All the execution magic is hidden from the user, but it not only shows the _concept_ - it is also editable (from the CLI, though a web API can be used), so the user can change its behavior.
If we want to dive deeper, the model can be extended with other elements which control the internal logic, like template files and Flask routing rules. Or even deeper, showing the whole model interpreter core written in Python (in the form of a graph built with pyClass, pyMethod, etc. frames).

Gilad Bracha said...

Dmitry,

a. HARC was shut down two years ago.
b. Your example, to me, is a visual DSL. As long as there is a clear visual metaphor (in this case the network as a graph) that can work very well. Once things get more involved or abstract, not so much.
c. Have you seen https://www.luna-lang.org/ ? I'm not saying that's exactly what you are talking about (in fact, I can't say I know exactly what you are driving at), nor am I endorsing that work, but it seems related.