Why don't smartphones do something similar?
There’s some heavy condescension in this thread, claiming that not programming “keeps people stupid”. As if creating can only happen in an IDE and not in Procreate or iMovie. It would help if programmers like us eased up on everyone else. Not everyone needs to be exactly like us.
Also, you have to be living under a literal rock if you’ve missed the potential impact of LLMs, specifically ChatGPT plugins. Once those are generally available, everyone can tell the LLM what they want in their native language and have it done. Everyone can build custom recipes that combine multiple apps and APIs in novel ways to get stuff done. That’s a revolution waiting to happen - where everyone will be limited only by their imagination, not by their knowledge of programming. And it’ll happen without having to show every smartphone user a BASIC terminal.
Coding requires deep thought, but phones are optimized for moving around rather than sitting in one place and thinking. As such, use cases like maps, calendars, communication and alarms get priority.
The reason people buy a phone is to communicate and get around. And manufacturers cater to those needs.
2. In the 80s, seeing your choice of text on a screen was something really new to most people. Teletext would not be universal until the end of the decade. Nothing else in the house even had a text display. Calculators were numeric only. These days, everything has a display.
3. There were no polished programs to compete with. A lot of early games were very simple. There was no App Store or Steam. You could write a Tic Tac Toe game never having seen a commercial implementation. Even if it was bad, your friends would be impressed.
If you want inclusive coding, Scratch is the world’s largest coding community for children - https://news.ycombinator.com/item?id=35373052 . The 40-character-wide BASIC terminal was of its time.
It (the Nokia N900) came with a full version of Linux and a hardware keyboard. Nokia's marketing pitched it as 'the hacker's phone', and they started trying to build a community around it by launching competitions and showcasing the best hacks and out-of-the-box ideas that could be achieved on the phone.
I remember running aircrack-ng on it to crack wifi handshakes that could be captured on the stock phone. Running nmap, war driving, and so on. It truly was incredible.
As expected, though, it was dead in the water. There was so little uptake in the community that Nokia stopped developing the Maemo OS it had made for it and silently killed the brand. I sometimes like to imagine what could have been achieved if they had pressed on with that product line and the marketing that encouraged breaking the norm and creating something cool on your phone.
I did try programming from scratch and even typed in some of the programs published in BYTE and whatnot. Nothing I ever came up with on my own ever seemed worth the effort and certainly wasn’t how I wanted to use my time. What a pain in the ass. Typing all that crap and debugging was never as interesting as using programs.
The vast majority of people have no interest in messing with programming. Most people have no interest in making anything. You can complain that it’s a symptom of consumer culture or whatever, but I think it has always been the case that more people want to use a tool than make a tool. Even back in the 80s, computers were bought primarily to use programs. Only a tiny minority were interested in programming, and that will always be the case.
You couldn't expect to have most use cases solved by the available code, so people were willing to code what they needed. An 8-bit computer without BASIC was almost useless.
Software was also very crude back then - it was possible for some teen to write a commercially useful piece of code in a few weeks - that code could then help you sell your hardware to other people.
Nowadays we have millions of programmers worldwide writing for only a few possible platforms, and the low-hanging fruit was picked long ago. Writing successful software today usually takes millions of man-hours. So hobbyists aren't that important.
Anyway, this is not an excuse for the state of operating systems today, which dissuade people from programming. Yes, I even mean Unix-like operating systems. Why? Simply because the mental gap between ‘programs as we use them’ and ‘programs as we write them’ is so large on a modern OS. Consumers are stuck essentially playing in the sandpit and don’t really learn anything about computing while they use the computer. Unless, of course, they have some weird drive that helps them weather the pain of going against the tide and actually learning what’s going on in their computer.
There is very little you can learn from an application (such as Firefox, Telegram, etc.) because they offer zero introspection and are completely alien things compared to how they are conjured.
Think of the Apple II: there was very little gap between using the ‘OS’, making a program, and using a program. They were nearly the same thing, or at least felt the same.
How would you interact with the computer? Enter some text. How did you make a program? Enter some text and save it. How did you run and interact with the program? …
Alan Kay talks about this a lot. If you imagine Smalltalk (now Squeak) as an OS, the gap between programming and using programs is tiny, so you learn by consuming and thus the leap to proficiency isn’t so big.
Another great counter-example is emacs. It offers introspection into everything and lets you modify a lot of its behaviour in a uniform way. I think this is why people like it so much. You learn it as you use it and you’re encouraged to modify it. It’s a far cry from how most software is made today.
This is of course changing, as mobile phones gain more functions. However, for it to happen it needs a change in culture. Phones have been largely seen as consumption devices, and still, a lot of people aren't comfortable writing long pieces with them.
It also could be as simple as what priorities the executive class wants to give their devices. I could imagine an alternate world where Steve Wozniak still had influence at Apple and pushed their lineup toward being more hackable.
In the nineties computers got Excel and the like, which took over basic tasks, and people built their hobby or business databases, their ledgers, and so on on top of that.
Today there is specialized software for most of those things, and computers are mass-market products made by companies that want users to consume media (via iTunes, YouTube ads, etc.) and to find the device approachable. The product managers fear that just a glimpse of a programmable interface drives people away and reduces media consumption.
In the 80s and 90s, one could still have a pretty good high-level understanding of every single part of the computer, from hardware to software. The whole stack.
We now have a whole generation of programmers who don't understand a thing about hardware. Not even at an enthusiast level. Nor do they understand anything beyond their own domain within software.
The bar is also a lot higher now than it was when basic graphics were the norm, UI polish was a luxury, and a command line was still acceptable. Back then you could quickly make a script to do something you wanted.
And then there is, I would argue, a whole generation of needless complexity added on top of complexity. Take web development, for example. CGI or PHP used to be simple. It wasn't easy, but it was simple. I don't know where to even begin describing today's web dev.
Its tiny form factor and lack of keyboard input are forbidding.
Even a lower-end unit is a relatively sealed package. I can get a command prompt on my 'droid unit via termux, but it's not going to show much, or easily integrate with the OS, for security reasons.
The kicker is that, from a risk-management perspective, I don't want to jack around with my (non-cheap) 'droid unit and risk bricking or destabilizing it.
- The IDE and language must be preinstalled or available with a one-click installer.
- The programming language must be extremely simple and easy to learn. Almost no currently popular language satisfies this requirement. Even Python is too complicated and requires learning too many libraries.
- Easy input and output, ideally with a GUI, but at least console style.
- Simple way to run a program. Just click or type "run", for example.
- Integration into the target platform. If programs are started by clicking on an icon, the deployment must provide apps with icons, of course.
- Easy deployment, either by source code or by a single file that can be run everywhere with an interpreter.
In addition to this, for modern phones there would need to be an interpreter for running programs on desktops, too, and an online library of extension packages and programs.
Not many languages/implementations/IDEs for phones satisfy these criteria. There are not even many for desktop. How many IDE/language combos do you use that are easy to learn and allow one-click deployment to all major platforms?
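To make that bar concrete, here is roughly the level of friction the list above is aiming at: a sketch in plain Python (purely illustrative, nothing phone-specific about it), console I/O only, one file, no project setup.

    # tip.py: split a restaurant bill. Console-style I/O, no setup, one file.
    bill = float(input("Bill total: "))
    people = int(input("Number of people: "))
    tip = bill * 0.15                      # flat 15% tip, just for the example
    share = (bill + tip) / people
    print(f"Tip: {tip:.2f}")
    print(f"Each person pays: {share:.2f}")

You save it and run it with a single command ("python tip.py"). Anything that needs more ceremony than that (an SDK, a manifest, signing keys, a build step) is already past what a curious phone user will put up with.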
The reality is the time for tinkering and stuff is over (for pc workloads, you can tinker all you like with hardware/agri/space/radio/nuclear open-source). Computers, whether they are on your lap or in your pocket, are being controlled by big corporations. You might say "But, Linux isn't controlled by a big corporation" and you'd be wrong. All these big tech companies want to wall you into their garden to shake you down anytime they want more capital.
The best way to get that 80s tinkering feeling again is to go get a Raspberry Pi or something and start building your own thing. Don't expect any of the consumer tech to ever cater to that kind of crowd again. They may pay homage to their roots but their bottom line depends on you forking over cash on their services and app stores.
Both Android and iOS are designed to be locked-down consumer goods owned by the manufacturer and only used by you; they're not tools that encourage you to learn and tweak. You need to look elsewhere for that.
ex: In many respects there is vastly more effort involved in making them general purpose computing devices. Why harm profits catering to people who want that when the crushingly vast majority don't want it, when they can't be trusted to use it (directly or indirectly) without messing something up (harming profit via support costs), and when they aren't able to prevent bad actors from misusing it and harming them (again harming profits)?
ex: Why do game developers skip story? Because 90% of the market doesn't care about it. Why do movie studios crank out films with garbage stories/plot? Because 90% of the market doesn't care about it. As long as people and therefore companies believe that generating revenue is the only thing that matters, this results.
Naturally I learned Java and Android as soon as I could. I published my first app soon after.
Well, it didn't work so well on all devices. It worked on all my test devices, so I was stuck, and the app was removed from the store.
Years later I released a game. It was taken offline several times before I stopped caring. Half the time I didn't even know why; I just submitted a new build against the current SDK, it got approved, and a few weeks later it was taken offline again.
The whole ecosystem sucks. You build really temporary apps that likely won't work on future phones without being rebuilt several times against new SDKs. Java is not a language that encourages fun in coding, and Android makes the Java experience even worse.
But it can be done to an extent. Grab a cheap external BT keyboard, a folding monitor stand, and Termux or a Scheme interpreter, or a BASIC interpreter (I haven't looked, but I assume they exist), and you're there.
I've done a lot of programming and writing on the phone that way.
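For example, here is the sort of thing I mean: a little sketch (assuming Python is installed in Termux via "pkg install python"; the script itself is generic, nothing phone-specific) that lists the biggest files eating up storage.

    #!/usr/bin/env python3
    # find_big.py: list the ten largest files under a directory.
    # Runs fine in Termux with a stock Python install.
    import os
    import sys

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # unreadable or special files: skip them

    for size, path in sorted(sizes, reverse=True)[:10]:
        print(f"{size / 1_000_000:8.1f} MB  {path}")

Run it with "python find_big.py ~/storage/downloads" after letting Termux access shared storage (termux-setup-storage). Not glamorous, but it's exactly the 80s-style "ten lines to scratch an itch" experience, just with a Bluetooth keyboard.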
I suspect that, even now, including an IDE as a default app with a language compiler or interpreter would be a waste of space for most users. They wouldn't use it, and they'd complain about it taking up valuable photo and music storage.
If kids could have just instantly downloaded any program they wanted, they wouldn't have been copying programs out of the back of magazines.
Personally, I came a little later, but I also learned stuff because I had to in order to do the things I wanted to do. After enough exposure I realized it was fun and interesting.
Nowadays you can go to a page like this ...
https://no-gravity.github.io/html_editor/
... edit the code and see the result in real time.
This is not to say that some people are better than others; it's just that every human has some strengths out of the possible set, and very few people are strong at everything.
For simplicity, say there are three main classes: math, arts, and psychology (as an example, the Betazoids in Star Trek are not totally fictitious; there are people so strong at sensing others that it can seem like they read your thoughts). This correlates well with the share of graduates in Germany, roughly 35% (I think ~10% math, ~10% arts, ~10% psychology). Some people are physically strong instead.
Programming is mostly math.
So early-80s computers put great tools into the hands of the math people, and that pairing bore a lot of fruit.
Once computer penetration grew far beyond 10%, it became impossible for the education system to produce more math people, so for everyone else the computer became just entertainment.
I believe that with AI teachers, highly customized private education will become more affordable, so a larger percentage of math-capable people will appear. I can't predict the exact number; maybe it doubles, and the same for the others (arts, psychology), so 70% would end up with a diploma.
The last thing companies want to happen with smartphones is what happened to PCs. People were able to upend many large commercial companies in the 80s and 90s because of the low barrier to entry.
Just to name a few: DEC, CDC, Data General, Wang. IBM almost folded due to the PC. So smartphones were made closed to prevent the same revolution from repeating itself.
Now the market is so big, we have enough customers to justify a polished, easy to use, featureful app for every task.
And people want security, meaning we need lots and lots of boilerplate to deal with all the permissioned APIs, plus we want reliability, so we can't just let people peek and poke. If we let people do that the app store would fill up with apps that used it, and half of them would be unreliable, and stuff would suck.
In fact, if there was a prompt at all, hackers could trick people into typing evil commands like they do to Linux newbies. Then those of us with tech experience would have to deal with it.
Much better to offer equivalent capability in a way that doesn't have an easy to exploit hole on the human side.
Smartphones DO encourage programming. There are like a million apps. They don't need a BASIC prompt, they just need to be there, be really capable, and make people feel it's worth learning to develop for them.
There are various sandboxed easy to use dev apps on Android for those who want to do old school 80s BASIC stuff. And it's a pretty cool thing, but it's best kept where it belongs, in an app, not conflicting with the primary use case of the phone.
I do wish Google would support Flutter a little better and make stuff easier so we didn't need buggy third party libs though.
Why aren't AAA games running on text based interactive fiction engines? Because for the target market, that's crap.
Most people do not want to learn programming. I know, it's a horrible thought, but it is what it is.
By the time smartphones shipped, communication with others and availability of vast repos of apps were almost unlimited. Only a tiny fraction of owners ever thought about coding up their own apps or exploring the infrastructure of the device.
And of course the makers of smartphone OSes had no intention of opening up their sealed gardens so users could compete with their near-total control of the user experience — a lesson that Microsoft's domination of the PC's OS taught to all budding monopolists.
But there are other reasons - these are the reasons that people who already like to program don't do so on their phones. Optimistically, improved interfaces (e.g. AI listening to your words and grokking your system diagrams) will improve the programmability of phones, perhaps even beyond the laptop.
The difference between the Apple IIe and the iPhone 12 is scale. 40 years of Moore's law means the phone you hold in your hand has roughly 2^20 more components at 1/100 the volume. Early computers, you could look at them with your eyes, fix (and break) them with your hands. The same was true of the experience of using a computer - it was all very "close to the metal", with small, simple abstractions (like booting into BASIC, or restarting your computer with a new floppy in it - computers were essentially stateless).
It feels like a stroke of luck to have grown up alongside each iteration of PC technology because I can see how it would be overwhelming to try to understand it starting from 0.
Smartphones are consumer devices. And a Raspberry Pi 400 is probably even more niche than 80s computers were in comparison.
In any case I don't think it's about the devices at all, there are just better things to do even for an introvert who doesn't go out much. Just different times.
And the target audience doesn't want to be a programmer or even customize their phone.
The classic desktop was designed to put a powerful tool in the user's hands; modern mobile + cloud is designed to milk data and keep people entertained.
A tablet device, on the other hand, is more likely to possess a higher-capacity battery and be usable for content generation / long-form editing.
And nothing stopping you from pushing your app to Google Play so all your friends can download it onto their phones.
Probably the major reason early-80s computers made BASIC so prominent is that there wasn't much else to do with them.
They could have a USB port that takes a keyboard, because typing with your thumbs is simian. They could have a video-output port, because the screen is too damn small. They could have an easy-to-use filesystem ... but they don't.
So they're unsuited to the purpose, on purpose I'm sure. They're not trying to reach the thinkers. If I wanted to program away from a desktop, it'd be something on Linux, something tablet-sized, a USB port for a roll-up keyboard, an analog audio jack. Maybe with a RasPi. No phone, radio, Bluetooth, GPS or spyware, no fat cluster of worthless junkware. I can wait. Until then, a laptop. And if the code can't run on a phone, oh well. They had their chance.
The best part was that the author had included a lovely help manual which briefly showed how to use command line tools, gdb, tmux, vim, irc, git, etc. but it was cryptic enough that I had to experiment a lot. And the environment was pretty constrained - BusyBox, a couple of pre-compiled tools but no package manager or X11.
This was my gateway into "real" programming, as opposed to typing things in an IDE.
But there is nothing phone-specific about this, if I had an easy way to try Linux on my computer, I would have preferred that. Obviously now WSL exists so it kind of fills that niche.
I think the app is no longer available and there are better alternatives, but I still have it on an 8 year old device.
Negative:
1. Highly abstract, hiding away internal functionality. 99% of mobile is GUI. iOS even abstracts away the filesystem. Android is more transparent.
2. Highly locked down: walled gardens increase the friction to run custom code. A terminal can be installed as an app, but on unrooted devices it's almost useless compared to a terminal on a desktop.
3. Ergonomics: smartphone keyboards and screens are not conducive for onboard development. Solutions exist but are pretty niche, and most people would prefer attaching a real keyboard.
Positive:
1. Arguably mobile platforms have increased the number of coders! The app stores facilitate distribution and payment, encouraging new programmers to make an impact. Desktops are in most cases the actual development platform though.
Computers used to be for nerds. Now they're for grandmothers. They're now idiot-proof appliances for consumption.
The same reason google and apple try to chain you to the app store... money. If they let you easily write your own apps, then they lose out on all those app store "fees".
But, as others have mentioned in various posts, I think the answer to the question of BASIC REPLs has multiple angles.
First, most early 80s computers did not boot straight to a programming REPL either. As soon as you had some kind of disk operating system you ended up in a shell and would have to invoke a BASIC interpreter or other programming environment if you wanted it. As far as I can tell, the feature threshold was having enough storage to make multiple programs persistently available. That was true across CP/M, MS-DOS, and Apple computers. Then, when they added GUIs, the shell became some kind of GUI menu instead of a CLI. With more storage, you get more built-in apps and less emphasis on a "blank machine" ready to take custom code.
After decades of this growth of storage, a smartphone is really not marketed as a general purpose compute platform. It still has more than vestigial "communications appliance" characteristics and is morphing into "cloud appliance". There is a feedback loop where vendors are marketing an integrated experience that sets the expectation for the next round of products too. And at the mass market volumes they are reaching, economies of scale mean that this approach is targeted towards the largest consumer markets.
It's almost fractal, but the cloud applications themselves are going through the same kind of shift. The commercial pressures are to create ever more integrated experiences for the mass-market user. It is a niche interest to want a general purpose platform where lots of capabilities are available, but the integrated whole is absent and waiting for a new custom program to be entered. Most consumers don't want the device or app that lacks this complete solution, and vendors don't want to provide all the infrastructure and then have some other party come in and claim all the value-add experience that is most visible to the paying customers.
We need something that utilizes touch screens to implement logic.
Something like Scratch but that would let you import libraries. Although I don’t think that’s imaginative enough to surpass traditional keyboard programming.
The problem with visual languages thus far is that they don’t really accomplish anything beyond text languages. I find them to be more difficult to understand than say python or JS.
The future of business logic app programming is going to be describing the details and having some AI implement it.
Meanwhile in the 80s, you had to know some command line just to run a game. If you wanted free games, you had to hand copy the code from a book. Once you learned a little bit, you were either content and kept playing the games or you wanted to learn more about how the game and the computer works.
We don't have that same barrier to entry now. That's good and bad, but that's part of why computers and smartphones are so ubiquitous now.
The smartphone is descended from simple communication devices, and adapted into a richer communication environment and a content consumption device.
You need a computer to do anything hacky with a smartphone, so you're more likely to get into the details on a computer in the first place. Smartphones are mostly useful for consumption of media, an activity that isn't really conducive to tinkering and modifying.
The closest modern equivalent to the early 70s/80s computers as something to learn programming on are things like the Arduino.
Smartphones just don't compare - they are inherently complex, not fully open/documented anyways, and tooling doesn't help. Can't even develop for an iPhone without owning a Mac too.
Cellphones are consumed by the vast majority of people as expensive toys and nothing else.
The non-profit hosts a race where the participants build their own cars. My app helps track the race cars. We use GPS, 3G, wifi, a touch screen, and a 12-hour battery to track the cars and send their location live to our website. This simply wasn't possible before tablets, and it is really cheap with a cheap tablet.
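For a sense of how small the moving parts are, the core of it is roughly this (an illustrative Python sketch only; the endpoint URL and field names here are made up, and the real thing is a native tablet app):

    # Illustrative only: push one GPS fix to the race website.
    import json
    import time
    import urllib.request

    TRACKER_URL = "https://example.org/api/position"   # hypothetical endpoint

    def post_fix(car_id, lat, lon):
        """Send one GPS fix as JSON to the tracking server."""
        body = json.dumps({"car": car_id, "lat": lat, "lon": lon,
                           "ts": time.time()}).encode()
        req = urllib.request.Request(TRACKER_URL, data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=10)

    # In the real app a location callback calls something like this every few
    # seconds; over 3G or wifi that's a trivial amount of data for a 12-hour race.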
That's a UNIX-ish shell environment.
I wish there was a way to use the Android APIs from a scripting environment on an Android phone. I tried a bit during the last semester, but didn't get anywhere due to lack of time. What I wrote was a BeanShell environment that lets you evaluate statements like a REPL. But since Android bytecode and Java bytecode are different, it can't even pass lambdas properly.
I started with the BASIC prompt in 1981. And I still feel some nostalgia for those times. I don't write "software" per se, but use programming as a problem solving tool. On the PC, when I'm creating stuff rather than consuming, I typically live inside a Jupyter notebook.
I need a keyboard and a screen. We haven't come up with a better coding interface.
The industry wants you to get everything you want with one click. Nowadays it's not even a click; you just need to talk about it and Alexa or Google Home will pick up on it.
Yes, computers at that time started with a BASIC prompt... because there was no app store and (at the start) very little in terms of apps. You were supposed to program it yourself (even by tediously copying listings from magazines), and, especially for the early generation of products, storage was either very finicky (cassette tapes) or quite expensive (floppy).
In the same years, consoles also started to become a real product: the Atari 2600, ColecoVision, etc.
Guess what? Consoles did not start up with BASIC or assembly... because you were supposed to use cartridges to play something that had been programmed by a guy working for a company.
So I would argue that personal computers were built and sold (at least at the very start) for people who had a background in electronics and/or an interest in programming. VisiCalc changed this almost overnight, because after that computers became "interesting" for small businesses... and this in turn created a market for word processors, small inventory management systems and so on. But also a big push to make "serious" computers (CP/M) that could fit the format/size/price of the Apple and TRS-80.
What I am trying to argue here is that PCs were at the start mostly intended as educational devices, because there was very little in terms of shrink-wrapped software to sustain other business cases. And if you just wanted something for your children to play with, you would buy an Atari at a fraction of the price of a Commodore PET or an Apple.
Smartphones were always sold as a "communication device first", and the business infrastructure to almost immediately create a large portfolio of apps was already in place (if you remember, the original idea for the iPhone was not to develop for iOS but just to create webapps).
TL;DR: If you bought an Apple at the end of the 70s, you absolutely needed a BASIC interpreter or it would have been just a very expensive paperweight. If you bought an iPhone in 2007, you didn't need to write your own software to get any use out of it.
Also imagine how much better the world would be if you turned on a smartphone and all you got was a BASIC prompt and had to program the rest of the system yourself.
If you wanna make youtube videos, I’d take an iPhone over a BASIC prompt any day.