1. Async functions can be called from non-async functions, with no async/await keywords. If you want to block the main thread, use the block_main() function: block_main() /* operations */ unblock_main()
2. Protocols can inherit from other protocols, and a protocol can be constrained to a class, like Swift (see the sketch after this list).
3. No `let`, only `var`. The compiler can optimize further.
4. if (a == (10 || 20 || 30) || b == a) && c { }
5. The asterisk (`*`) is replaced by an `x` operator for multiplication.
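Item 2, at least, maps directly onto what Swift already allows; a minimal sketch with made-up protocol names:

```swift
// Item 2 as it exists in Swift today: protocol inheritance plus a class-only constraint.
// Readable, Writable, FileLike, and File are invented names for illustration.
protocol Readable { func read() -> String }
protocol Writable { func write(_ value: String) }

// Inherits both protocols and restricts conforming types to classes (reference types).
protocol FileLike: Readable, Writable, AnyObject { }

final class File: FileLike {
    private var contents = ""
    func read() -> String { contents }
    func write(_ value: String) { contents += value }
}
```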
What are the features you found or you need in a programming language?
Stop caring about syntax and start caring about semantics. For example
> if (a == (10 || 20 || 30) || b == a) && c { }
I get what you're after, but don't optimize syntax for bad code. `a in [10, 20, 30, b] and c` makes a lot more sense to me conceptually than trying to parse nested binary operators, especially when you consider the impact on type checking those expressions.
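For what it's worth, that reading already exists in today's languages; a Swift-flavoured sketch with placeholder values for `a`, `b`, `c`:

```swift
let a = 20, b = 5, c = true

// Membership plus a guard reads linearly; no nested binary operators to untangle.
if [10, 20, 30, b].contains(a) && c {
    print("matched")
}
```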
> Async functions can be called from non-async functions, with no async/await keywords. If you want to block the main thread, use the block_main() function: block_main() /* operations */ unblock_main()
Think very carefully about the concurrency model of your language before thinking about how the programmer is going to express it. In particular, this model leaves a footgun: it is easy to leave the program in an invalid state.
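A minimal sketch of that footgun, using hypothetical stand-ins for the proposal's block_main()/unblock_main() (these are not real APIs, just stubs to show the control flow):

```swift
// Hypothetical stand-ins for the proposal's functions.
func block_main() { /* imagine: park the main thread */ }
func unblock_main() { /* imagine: resume the main thread */ }

enum LoadError: Error { case notFound }
func riskyOperation() throws { throw LoadError.notFound }

func loadData() {
    block_main()
    do {
        try riskyOperation()
    } catch {
        return                    // footgun: every early exit must remember unblock_main()
    }
    unblock_main()
}

// A paired/scoped form removes that whole class of bug:
func loadDataSafely() {
    block_main()
    defer { unblock_main() }      // runs on every exit path
    do { try riskyOperation() } catch { return }
}
```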
> What are the features you found or you need in a programming language?
The biggest misdesigns I see in languages are a result of PL designers pushing off uninteresting but critical work so they can explore interesting language semantics. Like the compile/link/load model and module resolution semantics have far more impact over the success and usability of any new language (because they empower package management, code reuse, etc) and the details are hairy enough that taking shortcuts or pushing it off until later will kneecap your design.
This is not an "additional feature" so much as removing a feature. One reason a programmer might declare something to be a constant is to prevent the symbol from being rebound. You're right that one consequence of using `let` instead of `var` is that the compiler can more aggressively optimize constants, but the primary consequence is to capture and enforce the programmer's intent that the binding be immutable.
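For comparison, this is the Swift behaviour being described (variable names are arbitrary):

```swift
var counter = 0
counter += 1            // fine: the binding is declared mutable

let limit = 10
// limit = 20           // error: cannot assign to value: 'limit' is a 'let' constant
print(counter, limit)
```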
Observables/reactive programming are pretty important to me on FE too, especially if you have async threads all over the place (looking at you web tech).
- Pattern matching (arguably a more flexible and general way to do the type of thing you have in your number 4 if syntax)
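A hedged sketch of how the condition from number 4 might read with pattern matching, using Swift's `switch` and placeholder values for `a`, `b`, `c`:

```swift
let a = 20, b = 5, c = true

switch (a, c) {
case (10, true), (20, true), (30, true):   // a is one of the listed constants, and c holds
    print("matched")
case (let x, true) where x == b:           // or a equals b, and c holds
    print("matched")
default:
    print("no match")
}
```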
For example, a web server needs authn, authz, logging, metrics, DB retries, and all the usual gubbins. A DSL for this would rock.
Elm is an example of a language that went down the DSL road, and while it arguably made some rough choices, it proved that a DSL can work really nicely. One side effect is that dependency management was perfect!
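A rough sketch of what such a web-server DSL could bottom out as, written as plain function composition in Swift; `Request`, `Response`, `Handler`, `logged`, and `authorized` are all invented names for illustration:

```swift
struct Request { let path: String }
struct Response { var status: Int; var body: String }

typealias Handler = (Request) -> Response

// Each cross-cutting concern wraps a handler and returns a new one.
func logged(_ next: @escaping Handler) -> Handler {
    return { req in
        print("-> \(req.path)")
        let res = next(req)
        print("<- \(res.status)")
        return res
    }
}

func authorized(_ next: @escaping Handler) -> Handler {
    return { req in
        if req.path.hasPrefix("/admin") {
            return Response(status: 401, body: "unauthorized")
        }
        return next(req)
    }
}

// Usage: stack the usual gubbins around a plain handler once.
let app: Handler = logged(authorized { _ in
    Response(status: 200, body: "hello")
})

print(app(Request(path: "/")).body)   // hello
```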
I want types, like typescript, but instead of compilation, there should be a "boot type check" which does about the same checks as tsc, but the types should be first-class, available at runtime, so I can pass them around, create, read, update and delete them at runtime.
I want runtime code creation that is more robust than creating source code as strings and then compiling them; I want first-class code-as-data like in LISP. I want to be able to define some blocks of code and conditionally combine them into new blocks, which I can compile to functions. I want to be able to derive functions that are more, less, or different from their parents (for example, removing or replacing a number of their statements). Basically, I want enough control that I can choose a pattern of generating ideal callbacks with no conditionals.
I want to be able to express (I use % for lack of a better idea right now; this is the type-safe version of the syntax):
const someBlock = (a: number, b: string) =% { console.log(a + 2 + c); }
And pass it to a function: myFunc(1, someBlock, 3);
(and the receiving function should be able to splice it in: function someFunc(aBlock: %) { const a = 1; const c = 3; aBlock; })
I want better introspection: if I pass a function, there's no reasonable, robust, and performant way to reason about that function. You can't access its parameter list to learn the names, types, or number of its parameters; you can't access its body to learn what it does; you can't even see its return type to determine what to do with its result. Mind you, most of this metadata is already present in the JS runtime, just not exposed to the language in a good way. You can't even access the AST.
Better to have some guiding principles or philosophy, and arrive at C, Lisp, APL, Lua, SML or Haskell.
I once saw this in a language called Metamine, and the demo was amazing: imperative and declarative programming interwoven.
It'd have a small number of reserved keywords. It'd read top-to-bottom without hidden control flow. Editors & tooling won't be necessary but will add progressive enhancements to the development experience when available. It'd be easy to tokenize and parse. No syntactic sugar.
The compiler will enforce Hungarian notation for all variable names. `iSize` will be an integer the same way `strName` is a string. The type information is never hidden away. Declare constants the same way, but with SCREAMING names. The const-ness is never hidden away.
Functions are higher-order and first-class. Declare functions the same way, but with an fn suffix. Functions don't necessarily need to be pure, but pure ones will be optimized. Async-await is automatic, and the compiler just optimizes it away when you don't need it. No function coloring.
The type system must be flexible enough for me to design complex algebraic data types and sum types. Pattern matching is a must-have in that case. I have no idea what the system would look like, but I want it invisible enough that it doesn't get in my way about 75% of the time. I want to make software, not write poetic programs.
No meta programming. I do not like the way it is done currently in almost all programming languages. But I do understand it's much needed. A new way of doing it must be figured out.
Everything should be explicit. Needs to allocate? Get an allocator. Have an exception? Return it. Set boundaries where the compiler can step in to perform optimizations.
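A small sketch of the "return it" half in Swift, using `Result` as the return channel (the explicit-allocator half has no direct analogue here; all names are illustrative):

```swift
enum ParseError: Error { case empty, malformed }

// "Have an exception? Return it": the failure is part of the signature, not a hidden unwind.
func parsePort(_ raw: String) -> Result<Int, ParseError> {
    guard !raw.isEmpty else { return .failure(.empty) }
    guard let port = Int(raw), (1...65535).contains(port) else { return .failure(.malformed) }
    return .success(port)
}

switch parsePort("8080") {
case .success(let port): print("listening on \(port)")
case .failure(let err):  print("bad config: \(err)")
}
```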
Backward compatibility of the language design and the compiler shouldn't get in the way of the language design's potential. The language design and the compiler will get better with time, but people won't rewrite their programs every year. The standard to be maintained is that programmers should be able to incrementally migrate to newer language versions one dependency at a time, one file at a time, heck even one function at a time. Most languages fail this. See JS with CJS/ESM mess for example.
Oh, also: file paths should always be POSIX-y. No `use a::b::c;` for imports. No aliased `@a/b/c` paths. Just plain absolute or relative file paths that I can follow in any text editor without language-specific tooling.
Because of the similarity to the Swift language, perhaps a Hazel[3]-style option to highlight differences between standard Swift and the Swift-like language.
-----
[1] Pygments, generic syntax highlighter: https://news.ycombinator.com/item?id=41324901
[2] Treesitter: https://news.ycombinator.com/item?id=39408195 / https://github.com/mingodad/plgh
[3] Hazel: A live functional programming environment featuring typed holes: https://news.ycombinator.com/item?id=42004133
981 unique mnemonics
3684 instruction variants
Since the architecture and instruction set have been evolving for decades, I often wonder whether the compiler is generating code for some lowest common denominator. If a sufficiently smart compiler were to compile code for the developer's most current CPU, the code would need to be recompiled for lesser systems. ARM architectures are getting instruction-set bloat, and RISC-V is also tending that way, with many variants being produced.
I prefer minimal syntax, e.g. Lisp, Smalltalk, Self. Then let the abstractions be plugged in with appropriate compiler support. I find that blessed built-in types constrain implementation designs due to their prevalence.
A linter where the reasoning for all rules is well explained, similar to shellcheck or eslint.
Both ideally integrated in an LSP, which also has all the common features of a modern LSP.
My pet peeve is multiplication: u64 * u64 = u128. True on any modern hardware, true in math, false in any programming language that I know of. There are many others, like unnecessary assumptions about how easy it is to access memory.
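For reference, the full 128-bit product is at least reachable in some languages today, e.g. via Swift's `multipliedFullWidth(by:)` on fixed-width integers:

```swift
let a = UInt64.max          // 18446744073709551615
let b: UInt64 = 2

// multipliedFullWidth(by:) returns the exact 128-bit product as a (high, low) pair,
// instead of silently truncating or trapping on overflow.
let product = a.multipliedFullWidth(by: b)
print(product.high, product.low)    // 1 18446744073709551614
```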
Vector and matrix math should be another first-class citizen.
The reasons for a lang vs. just writing in asm are: 1) I don’t want to distinguish x86 vs. ARM, and 2) I want a compiler + optimizer.
* Defining multidimensional indexing/enumeration for an instance of a class that isn't a sequential array, e.g. citizens['elbonia'][1234] mapped to indexer(country, id), and loops like for (citizen of citizens['elbonia']) mapped to enumerator(country) (sketched after this list).
* Full observability: you can listen to any action, like assignment/reading, instantiation/destruction, calling/returning.
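A hedged sketch of the indexing half using Swift's multi-parameter subscripts; `CitizenRegistry`, `Citizen`, and `residents(of:)` are invented names:

```swift
struct Citizen { let id: Int; let name: String }

struct CitizenRegistry {
    private var storage: [String: [Int: Citizen]] = [:]

    // citizens["elbonia", 1234] maps onto whatever lookup logic you like.
    subscript(country: String, id: Int) -> Citizen? {
        get { storage[country]?[id] }
        set { storage[country, default: [:]][id] = newValue }
    }

    // Enumeration hook for `for citizen in citizens.residents(of: "elbonia")`.
    func residents(of country: String) -> [Citizen] {
        Array((storage[country] ?? [:]).values)
    }
}

var citizens = CitizenRegistry()
citizens["elbonia", 1234] = Citizen(id: 1234, name: "Dogbert")

for citizen in citizens.residents(of: "elbonia") {
    print(citizen.name)
}
```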
- Haskell algebraic data types + syntactic sugar (see the sketch after these bullets)
- C/Lisp macros
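Swift's enums with associated values cover a good slice of the first bullet; a minimal sketch with invented types:

```swift
// A sum type, roughly in the Haskell ADT sense.
enum Shape {
    case circle(radius: Double)
    case rectangle(width: Double, height: Double)
}

func area(_ shape: Shape) -> Double {
    switch shape {                               // the compiler checks exhaustiveness
    case .circle(let r):
        return Double.pi * r * r
    case .rectangle(let w, let h):
        return w * h
    }
}

print(area(.circle(radius: 1)))                  // 3.141592653589793
```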
People want to be able to write either Python or JavaScript (i.e. the two most widely used languages), and have a compiler with a language model (it doesn't have to be large) on the back end that spits out optimal assembly code, or IR code for LLVM.
It's already possible to do this with LLMs straight from the source code (although converting to C usually yields better results than going direct to assembly), but these models are overkill and slow for real compilation work. The actual compiler just needs a specifically trained model that reads in bytecode (or the output of the lexer) and does the conversion, which should be much smaller in size due to having a far smaller token space.
Not only do you get super easy adoption, since nobody has to learn a new language; you also get the advantage of all the libraries in PyPI/npm that can be easily converted to optimal native code.
If you manage to get this working, and make it modular, widespread use will inevitably result in the community copying this for other languages. Then you can just write in any language you want and have it all be fast in the end.
And, with transfer learning, the compiler will only get better. For example, it will start to recognize things like parallel-processing code that it can offload to the GPU or use AVX instructions for. It can also automatically make things memory safe without the user having to specify it manually.
Is `a == (10 || 20 || 30)` really better than `a in (10, 20, 30)`? It seems the first is just ambiguous, and longer.
* procedures as first-class citizens
* lexical scope
* strongly typed
* single character syntax and operators
* inheritance and poly-instantiation as a feature of language configuration, but removed from language instantiation
* event orientation via callbacks. many developers don’t like callbacks but they provide the most flexible and clearly identifiable flow control path
* single string format with interpolation
Make it easy to use multiple cores without forcing the user to think about it constantly.
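Swift's structured concurrency is one existing attempt at this; a minimal sketch (the chunking strategy and `total(of:)` are assumptions, not a prescription):

```swift
// Fan the chunks out as child tasks; the runtime schedules them across the available cores.
func total(of chunks: [[Int]]) async -> Int {
    return await withTaskGroup(of: Int.self) { group in
        for chunk in chunks {
            group.addTask { chunk.reduce(0, +) }
        }
        var sum = 0
        for await partial in group {
            sum += partial
        }
        return sum
    }
}
```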
TypeScript is currently the best language, syntax-wise, that fulfills that on the dynamic side.
Swift also has a decent DevX and is close to that on the "static" side of the universe.
Sorry to disappoint, but there are much more important problems begging for solutions in infra, package management, transpilation, and other domains in all popular languages today.
All the syntax sugar and features are already good enough for your next Google or Facebook.