I'm much more curious about a programming paradigm that no longer uses text to communicate with computers but instead lets you directly manipulate data, receiving past, present, and future feedback about how it would change given your manipulations. Or to put it a different way: "what if" feedback. "If you did this, the data would change in this way" is visualized across many different dimensions, allowing you to 'feel your way' through feedback toward where you wish to go.
In other words, you give your computer your input data, and you modify dimensions which allow you to specify what you want the program to do.
To be clear, I'm not searching for specialized interpretations of this "Oh someone did this with typography" or "Oh someone did this with a game" but rather some more generalizable form like "Someone tried to replace Python with an idea like this"
I suppose the nearest thing I can think of is manually modifying the parameters of a neural net but that's perhaps too cumbersome because there are so many. Perhaps if you can put an autoencoder on top of that, and reduce the parameters down to a smaller "meta" set of parameters that you can manipulate which manipulate the population of parameters in the larger neural net?
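For what it's worth, the core of that autoencoder idea fits in a few lines. This is a purely illustrative toy: the decoder below is a fixed random matrix standing in for a real trained autoencoder's decoder, and all the names and sizes are made up.

```python
import random

# Toy sketch of the "meta-parameters" idea: instead of editing all N weights
# of a network directly, you edit a small latent vector z and a decoder
# expands it back into the full weight population. A real version would
# train an autoencoder on the weights; this linear random decoder just
# stands in for the trained one.

random.seed(0)

N_WEIGHTS = 2_000   # size of the full parameter vector
N_META = 8          # size of the "meta" parameter set you manipulate

# decoder[i][j]: how meta-dimension i influences weight j
decoder = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
           for _ in range(N_META)]

def expand(z):
    """Map a small meta-parameter vector to the full weight vector."""
    return [sum(z[i] * decoder[i][j] for i in range(N_META))
            for j in range(N_WEIGHTS)]

z = [0.0] * N_META
w0 = expand(z)

# Nudge a single meta-dimension: thousands of underlying weights move at once.
z[3] = 0.5
w1 = expand(z)
changed = sum(1 for a, b in zip(w0, w1) if abs(a - b) > 1e-12)
print(changed)  # nearly all 2,000 weights shift together
```

The point is only the shape of the interface: eight knobs driving thousands of parameters, which is what would make direct manipulation of a net tractable at all.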
I'm just really curious if there have been instantiations along these lines (as opposed to code live-running with results on the sides).
I realize this is all quite difficult, and it may even seem 'impossible' to have some sort of generalizable system that does this for all sorts of programs. I've heard people say it can't be done, and that code is the ideal format. I hold that in abeyance; I don't really know, but I'm intrigued to discover those who have a counter perspective and have attempted to build something.
Also really curious if you know other similar people to Bret Victor I should check out!
Bret Victor himself has made zero headway. And no, Dynamicland is not it. Dynamicland is still coded in text with no visual representation itself.
Other examples always show the simplest stuff. A flappy bird demo. A simple recursive tree. A few houses made of 2-3 rectangles and a triangle. Etc...
To be even more pessimistic, AFAICT, if you find a single example you'll find that even the creators of the example have abandoned it. They aren't using it in their own projects. They made it, made some simple demos, realized it didn't really fit anything except simple demos, and went back to coding in text.
I'm not trying to be dismissive. I'd love to be proven wrong. I too was inspired when I first read his articles. But, the more I thought about it the more futile it seemed. The stuff I work on has too many moving parts to display any kind of useful representation in a reasonable amount of time.
What I can imagine is better debuggers with plugins for visualizers and manipulators. C# shipped with a property control that you could point at a class, and it would magically make it editable. You could then write a custom UI for any type and it would show up in the property control (for example, a color editor). I'd like to see more of that in debuggers, especially if one of the more popular languages made it a core feature of its most popular debugger, so that it became common for library writers to include debug-time visualizers.
Even then though, it's not clear to me how often it would be useful.
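The property-control pattern is basically a registry of per-type renderers with a generic fallback. Here's a minimal Python sketch of that shape (every name here is invented for illustration, not any real debugger's API):

```python
# A registry maps types to custom debug renderers (like C#'s property
# control with per-type editors), with a generic attribute dump as the
# fallback "magic" view.

_visualizers = {}

def visualizer(cls):
    """Decorator: register a custom debug renderer for a type."""
    def register(fn):
        _visualizers[cls] = fn
        return fn
    return register

def inspect_value(obj):
    for cls, fn in _visualizers.items():
        if isinstance(obj, cls):
            return fn(obj)
    # Fallback: generic editable-property view over public attributes.
    attrs = {k: v for k, v in vars(obj).items() if not k.startswith("_")}
    return "\n".join(f"{k} = {v!r}" for k, v in attrs.items())

class Color:
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

@visualizer(Color)
def show_color(c):
    return f"Color #{c.r:02x}{c.g:02x}{c.b:02x}"

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

print(inspect_value(Color(255, 128, 0)))  # custom renderer
print(inspect_value(Point(3, 4)))         # fallback attribute dump
```

A library author shipping `Color` would ship `show_color` alongside it, and the debugger would pick it up automatically; that's the "core feature" being wished for.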
- "what if" feedback loops (crudely, "live programming")
- direct manipulation (an old idea but beautifully captured in his projects)
- making logic feel more geometric / concrete
- visualizing and manipulating data, especially over time
- humane interfaces (putting "computing" into the world around us, but without AR)
- etc.
Bret Victor is very much Alan Kay's protege and has unfortunately inherited the curse of people cherry-picking particular ideas and missing the bigger picture.
So as others have pointed out, the only person who may be fully attempting Bret Victor's vision is Bret Victor with Dynamicland. You may also be curious to check out Humane [1] which is a hardware startup founded by ex-Apple people. They're rumored to be shipping a projection-based wearable device this year. This device could potentially be a platform for people to experiment more in the direction of Bret Victor's vision.
[1] http://hu.ma.ne
Edit: This comment is a goldmine: https://news.ycombinator.com/item?id=34485994
——-
There are lots of hobbyists, academics, and even companies inspired by Bret Victor’s talks alone.
I know of at least 2 open source experimental programs that were inspired by specific demos:
https://github.com/laszlokorte/reform-swift
I know there are more too but I can’t find them right now. You could probably find a lot of good stuff just searching GitHub for “Bret Victor”.
There are lots of people in academia experimenting with programming languages and environments. Try searching for papers that cite Bret Victor as well and I’m sure you’ll find plenty.
For a quick glimpse at the academic world without spending hours looking for papers worth reading, I recommend perusing the Strange Loop Conference YouTube channel. There are some interesting experimental programming languages and IDEs out there.
After that project ended I started working on my own attempt called Mech, which specifically handles the time-travel and what-if features you mentioned (https://GitHub.com/mech-lang/mech; you can play with an early alpha version here: http://docs.mech-lang.org/#/examples/bouncing-balls.mec). I’ve made sure money running out won’t kill this project, so hopefully it’s a fuller attempt.
Someone else posted a link to futureofcoding.org, which is a community that works on these types of projects. You can find a lot more there.
Some of his talks:
Inventing on Principle https://www.youtube.com/watch?v=PUv66718DII
The Future of Programming https://www.youtube.com/watch?v=8pTEmbeENF4
The Humane Representation of Thought https://www.youtube.com/watch?v=agOdP2Bmieg
Media for Thinking the Unthinkable https://www.youtube.com/watch?v=oUaOucZRlmE
Seeing Spaces https://www.youtube.com/watch?v=klTjiXjqHrQ
Drawing Dynamic Visualizations https://www.youtube.com/watch?v=ef2jpjTEB5U
Reactivity has certainly become more popular, and is a standard part of web development now. And ipywidgets are an example of creating manipulatable abstractions in data science.
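In a notebook, ipywidgets' `interact` binds a slider to a function argument and re-renders on every drag. Here's a dependency-free sketch of that same binding pattern (an illustration of the idea, not how ipywidgets is actually implemented):

```python
# An observable value that re-runs its subscribers whenever it changes:
# the essence of binding a slider to a variable and a plot to its output.

class Observable:
    def __init__(self, value):
        self._value = value
        self._subs = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self._value = v
        for fn in self._subs:
            fn(v)

    def subscribe(self, fn):
        self._subs.append(fn)
        fn(self._value)  # fire once so the view starts in sync

# "Slider" state bound to a derived output, as a live plot would be.
radius = Observable(1.0)
outputs = []
radius.subscribe(lambda r: outputs.append(3.14159 * r * r))

radius.value = 2.0   # dragging the slider...
radius.value = 3.0
print(outputs)       # one recomputed area per change, initial value included
```

Everything "live coding" demos do is some elaboration of this loop: state changes propagate immediately to a visible output.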
I don't think there's any way to get away from this abstraction. At its lowest level, everything is encoded in binary. All abstractions on top of binary are just interpretations of the underlying stream, text being a relatively simple encoding (the ASCII table, or UTF-8's multi-byte structure). Structured data is similar, just multiple pieces packed into one contiguous space. You will always have to build on top of this fundamental; there is no simpler layer.
That being said, I quite like:
* Datasette - https://datasette.io/ - I have a feeling you could connect a lot of these instances and truly make something interesting there
* Light Table IDE - https://www.youtube.com/watch?app=desktop&v=H58-n7uldoU
* I have at least 2 more but I can't find them in my favorites
I'm trying to make my own as well. Hardest thing is giving myself enough time to do it, but I'm currently starting to structure my life around it.
It shows just how far the Dynamicland concept can be pushed into a hyperspecific feature set customized to fit a single domain -- because they can deconstruct the user experience to that level of detail.
Extending a tangible UI out to the actual OS itself has been the goal at Dynamicland from the get-go, but here we finally see it as physical 3D objects participating with realtime digital feedback in the built environment!
Back to your question: from my naive understanding, Smalltalk seems to be an all-in-one environment. The Glamorous Toolkit [1] seems to be that environment on steroids. I have no useful experience to share though.
Usually a combination of hot reloading, good intellisense / autocomplete / quick docs, debugging.
I did a talk and built the animations with Manim (mathematical animation library) and at the time you had to render the clip and play it each time you wanted to preview it, causing a significant delay (10-15s) between each change and what it looked like rendered.
It was unbearable (but finished the project). Afterwards, I put together an environment using p5.js that allowed instant feedback, even at a specific point in time. I also threw in an in-browser editor so I could keep working on an animation on my phone as I was doing a lot of walking around that time (usable, but barely).
This was the result of that project:
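One pattern that makes that kind of instant, scrub-anywhere feedback possible (a guess at the general approach, not necessarily what that particular project did) is making each frame a pure function of time, so nothing needs to be re-simulated to jump around:

```python
import math

# Each frame is a pure description of the scene at time t, instead of
# state accumulated frame by frame. "Jump to t = 3.7s" is then just a
# function call, with no replay required.

def frame(t):
    """Scene at time t (positions only; drawing is a separate concern)."""
    x = 100 + 50 * math.cos(t)         # circle orbiting a center point
    y = 100 + 50 * math.sin(t)
    radius = 10 + 5 * math.sin(2 * t)  # pulsing size
    return {"x": x, "y": y, "r": radius}

# Scrub anywhere instantly; the result is identical whether or not
# any earlier frames were ever rendered.
print(frame(0.0))
print(frame(3.7))
```

The trade-off is that the animation has to be expressible declaratively; simulations with genuine state (physics, user input) need recorded snapshots instead.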
1. Whenever you zoom out of the code-level, you lose granularity and thus flexibility and power.
2. In order to gain expressiveness, you can constrain the domain, but again you lose flexibility to implement what you want and how you want.
3. It's difficult to avoid losing the ability to express things in general ways whenever you switch to visual or physical representation of code.
4. A lot of the ideas you might have end up being more simply represented by code, and more easily manipulated by way of text and keyboard.
5. A lot of things end up just being superficial wrappers over code. Superficial in the sense that they only hide surface-level complexity (e.g., reducing the visual volume of large code blocks).
6. Catering interfaces to novices often hampers experts.
There seem to be a lot of trade-offs. I don't know if these are laws per se, but they seem difficult to break.
What interests me particularly are new ways to create general purpose programs using methods that are more efficient and more intuitive, but it seems like a really difficult task bound by near-inescapable trade-offs.
I want a lot of things that Bret Victor wants from computers.
I journal my ideas on GitHub in the open, see my profile for links.
I want a GUI that is self referential that tells you how it is built and allows the backend to be visualised. This is similar to React Component browser extensions that let you see the live tree of elements.
Observability of software is very difficult. I want to see train-track animations of software running.
https://twitter.com/i/lists/1617421345121353733
Many people are doing really great and innovative work in the space. They are just mostly researchers and hard to find.
Hope to find more from threads like this.
Edit: downvotes?
These are not the sweeping, fundamental changes that Bret Victor envisioned, but we are collectively moving toward more interactive programming. Imagine modern web development without hot reloading.
Clojure is the language where I see this happening most, and which is seeing the most expansion toward "visualization and interactivity as part of the backend dev experience".
You just have to bind sliders to variables, and tie your outputs to graphical plots and there you go.
The big challenge is to come up with a large library of pre-built canonical graphical representations for different programming abstractions and being able to wire them seamlessly together.
Essentially, I want programs to be less dependent on low-level constructs, much like today we don't depend on pointers or registers (assembly) to do our work. The idea that we can't have larger abstractions handled by compilers or runtimes seems silly to me.
I was one of the backers of https://www.kickstarter.com/projects/ibdknox/light-table/des... and have wanted to see progress in this domain for a long time.
One thing I would add to the conversation is that one of the most potent ways to move this discussion forward is to create technical demonstrations of how this sort of interface could work, presented as video. It's completely unimportant if the functionality is actually working, so long as you disclose this up front.
The goal is to give people with less imagination and hopefully more technical acumen an opportunity to roll up their sleeves and maybe work on making it real.
Even well before Bret Victor's time, there were tools for visual programming. I have been using LabVIEW to maintain data processing in an optical laboratory.
"Interactive Analysis and Optimization of Digital Twins": https://doi.org/10.18154/RWTH-2022-07066
Direct link to PDF: https://publications.rwth-aachen.de/record/849852/files/8498... (94 MB). Unfortunately it's in German, but there are many, many illustrations and pictures, as well as some English quotes.
I am still amazed how some of Victor's very basic principles (e.g. "Show the data, show comparisons!", immediate feedback loops, ...) are always so essential/fruitful in generating amazing solutions to certain problems...
Aside from my work, I think Processing's "Tweak Mode" is another very good "real-life" example you might want to check out: See e.g. http://galsasson.com/tweakmode/
What's funny is Bret's message wasn't actually "you should go make direct manipulation UIs". It was "you should have design principles", and direct manipulation happened to be his baby (to the point where he went off to do Dynamicland). I have heard he feels most people misunderstand his talk because of the superficial wow moments.
Your program doesn't need time-travel debugging if it already works.
Now, it might appear that the tools Bret is working on help you to think about your problem better, and I think that's true at the margin. But they don't seem to help that much since mostly people are lazy and don't want to think hard (myself included a lot of the time). So these tools slightly lower the activation energy and probably somewhat increase the number of people who are able to learn certain concepts, but they don't lower it that much and a motivated person can generally find a good explanation of anything that's not at the research frontier.
It prints the results of executing your code line by line, next to your source code.
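That inline-results idea is surprisingly approachable in Python: `sys.settrace` gives you a callback per executed line, which is the raw material for annotating source with live values. A toy version (not the actual tool being described):

```python
import sys

def trace_locals(func):
    """Run func, logging (line number, locals) at each line and on return."""
    log = []

    def tracer(frame, event, arg):
        # "line" fires just before a line runs; "return" captures the
        # final state, so the last assignment is visible too.
        if frame.f_code is func.__code__ and event in ("line", "return"):
            log.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)
    return log

def demo():
    x = 2
    y = x * 10
    z = y + 1

# An editor plugin would align these entries with the source lines.
for lineno, local_vars in trace_locals(demo):
    print(lineno, local_vars)
```

The real tools add caching, re-run-on-edit, and rendering next to the code, but the data they display comes from exactly this kind of hook.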
When we ask ourselves: "how should we structure our code?", the primary goal should be: so that it's easily visualizable and debuggable. Our guiding principles at the moment are all over the place. For a long time it was: make code testable. Then we had things like: eliminate side-effects, immutability, one-way data flow, static type-checkable, etc.
The problem is that by ignoring visualizability, when you do come to visualize it (which everyone inevitably needs to when reading code and building a mental model), it's this huge tangled mess. People forget that the purpose of structuring code is to make it easy for HUMANS to understand. The computer only understands assembly at the end of the day. So anything beyond that is supposed to be for our benefit.
https://m.youtube.com/watch?v=TqISbaJ7qug
Iterating on this in a modern way remains TODO. As John Henry and his counterpart note in that video, the beauty of those tools was that people who didn’t care about code were able to create interactive experiences.
The whole “everyone learn to code or you’ll be poor” thing of the past 10 years has been a huge and unnecessary distraction.
Bret's long-term goal was to reform society, and his stretch goal was to fall in love. I'm pleased to say I've achieved one of those.
The analysis is all done passively, as the methods are being called, no breakpoints needed. E.g. a new, onboarding engineer could easily see the most important methods for a given API endpoint.
It's not directly manipulable, but it does give the feeling of "ohhhh, THIS is what my program is doing."
I think Bret would be proud.
In a way, the interactions can be broken up into subject, verb, and object. I edit text. I crop a photograph. The subject is the user, and all the representations that extend the user. The mouse pointer in that sense is part of the user. The verb is the action. To select text on screen, I put pressure on a touchpad, move my fingers and release. This is a learned interaction; there are no inherent affordances to a medium (only, at best, inherent affordances to a tool that extends you). The object is a representation. With computer interfaces, you hardly ever interact with something in an immediate way. I want a comment to appear on this site, but instead I am writing this text in a white box and not where the comment would appear. All the computer interactions are mediated by these in-between steps. (An example of unmediated interaction would be cooking. What you chop is what you get.)
To get a rich feedback loop in such a mediated environment you need to try and make it as unmediated as possible. To make the subject less mediated, the whole body of the user and the quality of the movements could be integrated into interactions. Here I have hopes for AI-supported movement recognition. In addition, the representations of the user (e.g. the mouse pointer in existing systems) could become less binary in their state (I don't have a good example for it, but a hammer can be used with a whole range of intensity while a mouse pointer either performs a click or not).
To make the object less mediated, its representation should ideally be as transparent as possible. AR in this regard is much more promising than VR, since in VR the representation takes place within another representation.
To make the action less mediated, the action should be able to be embodied by the (technologically extended) user, as well as being inherent to the medium which represents the subject. Here we are building a bridge between the human body and a way to represent things that physically do not exist. It's never going to be ideal, but it could be better than what we have now.
So for such a task you'd need to be a UX person, a choreographer, a programmer, an AR/AI person, and ideally have some insights into media theory. It's just not an easy task.
As a side effect, you'll find out about people working on similar installations.
It is very exciting to post the vision. It gets many views and spreads far and wide.
It is much less exciting to hear about the issues the vision encountered. And visions do encounter issues.
Humans can only hold so many ideas in their head at the same time, commonly expressed as somewhere around 7 or so. Our programming systems are intrinsically and deeply bent around this, trying to limit the necessary context for any given bit of code to fit within this constraint. We don't even see this because it is the water we swim in.
So when we imagine coding, we can hardly help but imagine manipulating like 7 things. And 7 things fits on the screen great, and 7 things fits in our minds great, and it makes a great demo. What is much harder is realizing just how often we work with things that are much larger than that, and intrinsically so.
A good solid example that is larger than that, but still in the realm of things we can understand, is an HTTP request. A few dozen elements, each of which may have a few "things" in it, amounting to one or two hundred things. You've probably seen visual representations of such, with the header names on the left and the values on the right. Already any attempt at making this visual and live is straining, but it can be done.
And then you have a database table with, let's say, 150 columns and several million rows, and the visual metaphor is dead.
We have too many things like the latter. We encounter them all the time. The reason for this is not sloppiness in programming or a failure of vision, but the fact that our 7 +/- 2 values we can hold in our mind is really really small and simply inadequate to address the universe. We encounter things all the time that are very challenging to fit into that box. Any programming metaphor that requires that everything fit into that sized box is a non-starter. A total non-starter.
This is ultimately why "visual programming" in general has failed and will always fail.
If we could as easily hold, say, 50 things in our mind, we would have so many more options. We burn so many of our 7 +/- 2 values just holding references to the place we need to go to go get more values if we need them, e.g., dedicating a slot just to the fact we have a database connection. Further slots needed to handle what we're querying from that database, etc. If we had more registers we could spend a lot less time just managing the registers, even if we would still ultimately hit limits. But this stuff slams into a complexity wall with us humans so, soooooo quickly after the first pretty demos.
This is why you see a steady stream of such demos, which look awesome, and then they go nowhere, because you can't hook it up to a web server, or a non-trivial database, or just about any code you can imagine, really. Games are already an "easy mode" for this demo, most things are not games or graphical displays, and they fall over quickly too because again as soon as it's non-trivial in the game world it doesn't work anymore.
And this makes me sad. But it also makes me sad to see people jousting with the same windmill over and over. So my usual followup here is, if you are interested in trying yourself, I strongly suggest looking at the historical efforts, and at least coming up with a theory as to why whatever it is you are thinking about will do better and will solve the problems I lay out here. Maybe there is a solution. I can't guarantee there isn't. And precisely because I know it is hard, if you succeed, I will be that much more impressed. But I do want you to know it is a hard problem. The common characterization of it as being easy and obvious and my god how could everyone else be so stupid as to miss this is frankly offensive. We are not collectively stupid. We are not collectively idiots. We do what we do for good reasons. If you do not start from that understanding, if you don't understand the good reasons why the current textual paradigm is dominant, you're doomed from the beginning.
It seems like his whole thing is allowing people to access new thoughts that they couldn't before.
I'm not really familiar with that kind of almost psychonautic inspired approach to programming, so I don't really have a good understanding of his vision. But it seems like a few tools have parts of his vision.
FORTH is famous for small programs that almost redefine the language itself and can say a lot in less than a page. Seems very relevant, but I don't know anything about it besides the basic outline and history.
LISP seems to have some of the same characteristics. Maybe a live code environment for FORTH would be of interest?
I'd argue Excel and co are probably the biggest success in live general purpose data manipulation, and probably the only ones out there I actually have any interest in.
Spreadsheets are also an amazing example of something uniquely digital, that's not quite just a paper emulator, and isn't just some crazy impractical experiment.
Spreadsheets are pretty amazing when you think about it.
Really the big thing that makes them special is the idea that you can put an =expression wherever you could put a value, embedded in a normal, consumery app that otherwise works similar to other software.
The other thing that makes it special is that it's highly constrained. You get a 2D grid of cells. Creativity loves constraint and they seem to have the perfect amount.
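That "=expression wherever a value goes" idea is small enough to sketch. A deliberately crude toy, just to show the shape of it (it uses `eval`, recomputes everything on every read, and has no cycle detection or caching, unlike a real spreadsheet engine):

```python
# A cell holds either a plain value or an "=expression" over other cells,
# and reads always see an up-to-date result.

class Sheet:
    def __init__(self):
        self.cells = {}

    def __setitem__(self, name, value):
        self.cells[name] = value

    def __getitem__(self, name):
        v = self.cells[name]
        if isinstance(v, str) and v.startswith("="):
            # Evaluate the formula with the other cells visible by name;
            # self[k] recurses, so formulas can reference formulas.
            env = {k: self[k] for k in self.cells if k != name}
            return eval(v[1:], {}, env)
        return v

s = Sheet()
s["A1"] = 3
s["A2"] = 4
s["A3"] = "=A1 + A2"
print(s["A3"])   # 7
s["A1"] = 10     # change an input; the formula result follows
print(s["A3"])   # 14
```

The striking part is how little machinery separates "a grid of values" from "a live program", which is arguably why non-programmers get so far with it.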
It seems average users are perfectly capable of cramming their use case into a set of high level primitives. That's different from almost every other "Code for everyone" system that just tries to make low level primitives accessible.
It's easier to use the wrong framework to do something than build the right thing with raw materials.
From that, people who are not programmers at all run half the financial system. And it works. The world has not exploded.
It might be almost never the ideal choice, but then again, I don't think any DIY programming is likely the ideal choice when off the shelf special purpose apps exist, I'd never suggest anyone but a megacorp do their own booking and billing app or something.
I can't think of a single other tool that lets people who would never program, and hate programming... write programs. All the other tools are just things an average person could learn. But they won't, because they'll wonder why. And they'll be right, because they probably don't want to spend enough time with it to be able to do anything they couldn't do in a Google Sheet.
Not only that, it's a live environment that's truly practical, it's not just a tool for thinking, it's a tool for a subset of the same stuff Python might do.
It's something I use on occasion, and frequently use =expressions in other contexts.
It might not be truly general purpose, but it sure is impressive.
reminds me of Bush's memex "pathways"
- Jupyter notebooks
- Dev Cards
- Storybook
- Dark Lang
- Excel!
Before you can manipulate anything you have to define a set of affordances. If you have no affordances you have... nothing.
A lot of programming is really about manually creating affordances that can be applied to specific domains. The traditional medium for this is text, with dataflow diagrams a distant second.
People often forget that this is still symbolic programming. You could replace all the keywords in a language with emojis, different photos of Seattle, or hex colour codes, but we use text because it's mnemonic in a way that more abstract representations aren't.
Dataflow diagrams are good as far as they go, but it doesn't take much for a visual representation to become too complex to understand. With text you can at least take it in small chunks, and abstraction/encapsulation make it relatively easy to move between different chunk levels.
At the meta level you can imagine a magical affordance factory that somehow pre-digests data, intuits a comprehensible set of affordances, and then presents them to you. But how would that work without a model of what kinds of affordances humans find comprehensible and useful?
ML etc are the opposite of this. They pre-digest data and provide very crude access through text prompts, but they're like wearing power-gloves that can pick up cars but not small objects. You can't tell ML that a specific detail is wrong without retraining it. The affordances for that just aren't there.
And of course many domains require specialised expert skills. So a workable solution would require a form of AGI clever enough to understand the specific skills of an individual user, so that affordances could be tailored to their level of expertise.
I can't see generalised intuitive domain manipulation being possible until those problems are solved.
It's trivial to explore visually the space of all possible outputs for a program of length 1 instruction. But how about a million? The output is just going to be useless noise 99.99999% of the time.
What even is a dimension of a computer program? How many divisions it performs? How many gotos there are?