I'd expect the software engineering team to manage these apps, so the benefit I can see is the quick turnaround on the UI, rather than handing over to non-engineering users for development.
What are people's experiences with these tools? More specifically:
Can you do an effective SDLC with them? e.g. code on a staging environment, push to version control, promote to prod, rollback to old version
Have you actually saved time with them?
Did you avoid introducing unworkable complexity?
In the 1980s there was a major effort to make programming intuitive through visual-based programming tools. There was even a journal dedicated to visual programming. I happened to read through some of these articles in 2002-03 for a college project.
The conclusion in the last journal was that programming requires concepts that can only be expressed in words. Visual (intuitive) programming eventually failed because it wasn't expressive enough.
At the same time, Excel took off. Although it doesn't have "variables," it does allow for expression of abstract concepts.
> I'd expect the software engineering team to manage these apps, so the benefit I can see is the quick turnaround on the UI, rather than handing over to non-engineering users for development.
IMO, make sure the team that's going to manage these apps is involved in choosing the toolchain and has complete buy-in. (Or at least has a healthy skepticism.) There's nothing worse than an edict coming from down on high to use a bad tool.
Also, keep in mind that if you're working with well-educated software engineers, lightweight scripts in languages like Python, JavaScript (via Node.js), or LINQPad (lightweight C#) can go very far. Because you're working with mature languages, you minimize your technology risk. When I was at Intel, we had a rather robust batching system for VBScript. It could call into custom COM objects, usually written in C++ or C#, when needed.
It has been successful, but I recently had to patch it to remove a new, annoying "made with appsmith" badge on our applications that can only be disabled in the enterprise edition, or by editing the source code of the open-source one (and being willing to share your patch, of course). Another important planned feature, SAML/OpenID, has also recently been announced as enterprise-only. Their Kubernetes Helm chart is a mess too: they want to be beginner-friendly with everything in a single stateful container. I understand the reasons, but I don't like it.
On the good side, we build applications in a few days, and it's great for prototyping and simple applications. One large application we built has reached the limits of no-code platforms, in our opinion. It will be rewritten in code, but we don't think it was a waste of time and resources: we could iterate fast, and we didn't know whether a more expensive development effort would be worth it.
We created Appsmith primarily because, as a backend engineer, I didn't enjoy mucking around with HTML/CSS just to build admin panels. Hence, we built a way to create such apps really quickly.
Appsmith also has a deep integration with Git. This allows you to create feature branches, raise pull requests and have different branches for your staging and prod environments. This also allows you to rollback your application to an older version.
While there are obvious pitfalls of using a low code application, I'd wager that you'd be able to go much further, much more quickly than you initially estimated.
But the real benefit we gained is reduced support time for issues during on-call. Without any additional tooling such as Datadog or other expensive products, we are able to handle support really easily with the UI that comes built in. So yes: we reduced complexity, saved a lot of time, and are still able to code everything we need, and the platform connects everything really well.
I'm generally a fan of reducing complexity for building internal tools. We used to use Retool with a combination of Google's App Engine, Google Cloud Scheduler, and Mode Analytics to support our internal teams but we consolidated all of it under Superblocks.
They have charts, you can build backend functions (and call them on a schedule!), and you can build a UI just like you can with Retool (they have feature parity in most areas with Retool on the UI side).
The best part has been that they support JavaScript and Python at the same time, which has let our data and engineering teams live in the same ecosystem and work together to build internal data tools for our team members.
Huge fan!
Low code can deliver some features 4x faster, but sometimes a stupid feature can take 20x the time just because the low-code platform won't support it. Or sometimes the solution is so ugly that the whole project becomes a mess.
If you want to deliver top-quality work, you are constantly fighting the low-code platform's limitations.
However, if you are OK with those limitations, it can be beneficial.
It's super useful for developing for things like game jams, where you have a super tight time limit, but you definitely have to work within the constraints of the system. If you try to push the boundary too much on what's "expected" it becomes painful very quickly. Additionally, porting the code to other platforms is difficult since you're at the whim of Clickteam to implement it, and historically their "exporters" have been buggy, leading to a cottage industry springing up around the tool to create alternative runtimes for better performance/portability. [1]
From what I'm reading in the other comments, this experience isn't unique to games; it seems to apply to other low-code business application tools as well.
[0] https://media.indiedb.com/images/engines/1/1/19/mmf_eventedi...
The business, however, loves loves loves Smartsheet. Smartsheet looks like Excel, but we have configured it to power all kinds of business processes/forms/automation/websites. It is pricey though.
Regarding your questions: SDLC is hit or miss. My attitude is to get the business involved early in these low-code tools, as they are the ultimate judge of "correctness".
I don't think we save time. If you had a 100x developer, they could create all of these things much faster than these low-code tools. The problem is we don't have 100x developers, and if we did, they would probably leave for better jobs. What we do gain is the ability to have things created by our junior developers and admins.
The tools are only as good as the vendor support. We want the junior developers and admins to reach out to vendor support for issues and training; we don't want them to become "project managers" who nag the senior developers.
I spent a few years building data analytic systems for big telcos to reconcile switch and billing data. For this I used the tool my company was selling which was a visual (ugly but very practical) graphical node-based system. You pick the right type of node for a given (stream processing) need, make some configurations (including possibly adjusting the number of inputs and outputs), and then connect things together.
Many of the end results would have been reasonably simple to build in code by an experienced developer, but one of the key benefits of this toolkit was the ability to walk a client through the system, explaining what was going on. The visual nodes and connections were reasonably accessible to non programmers.
An added benefit of this kind of toolkit was the ease of discovering what the data actually looked like and seeing the effects at the output of each node. The outputs also had little number tags which provided surprisingly useful high level information: "How many records did we just cull from the source data with this filter?"
The challenge we encountered was in productionizing these systems. Sometimes they were just tools used to solve a one-time problem, but other times they were intended to become parts of a greater process. (In the latter case, I always felt we should just take the lessons learned and write the final product in code...)
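The node-and-connection model described above can be sketched in ordinary code. This is a hypothetical illustration, not the vendor's actual toolkit: each node transforms a stream of records, and each keeps the kind of emitted-record count that the little number tags displayed.

```python
# Minimal sketch of a node-based stream pipeline (illustrative names).
class Node:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn        # record -> record, or None to cull the record
        self.count = 0      # records emitted, like the output "number tag"

    def process(self, records):
        for record in records:
            out = self.fn(record)
            if out is not None:
                self.count += 1
                yield out

def run_pipeline(source, nodes):
    # Chain each node's generator onto the previous one's output.
    stream = iter(source)
    for node in nodes:
        stream = node.process(stream)
    return list(stream)

# Example: reconcile switch records, culling test calls.
switch_records = [
    {"call_id": 1, "duration": 60, "test": False},
    {"call_id": 2, "duration": 0,  "test": True},
    {"call_id": 3, "duration": 30, "test": False},
]
nodes = [
    Node("drop_test_calls", lambda r: None if r["test"] else r),
    Node("to_minutes", lambda r: {**r, "minutes": r["duration"] / 60}),
]
result = run_pipeline(switch_records, nodes)
print([n.count for n in nodes])  # -> [2, 2]
```

The per-node counts make it easy to answer exactly the question mentioned above: "How many records did we just cull from the source data with this filter?"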
SDLC is a frequent pain point. It's often non-existent or hacky, e.g. "export this giant XML doc and re-import it if needed." Any step that requires clicking buttons generally leads to that kind of pain: if I need to copy-paste or click on something for the product to work, I won't be able to automate it, and the whole lifecycle needs to be completely automatable. Similarly, if the config / setup / customization you do is obfuscated after it is specified, it can be a headache to work around. And while maybe not the end of the world, many of these tools also decide to "handle" version control on your behalf, which can be fine, but will almost by definition be divorced from your usual approach and processes.
On the last question, I think this boils down to how well one can operate at the surface level. It is not uncommon in these low-code apps to have to understand in detail what is happening under the hood. The top-layer abstraction might be very simple in practice, but if working with it frequently requires knowing how that abstraction will execute, you get an extra layer of complexity that hasn't actually abstracted anything. In a way they often function like a second language going through a translator: if I have to consider both the Spanish version the no-code tool cares about and how it will be translated to English under the hood, it's just extra confusing for perhaps little gain. What makes it extra pernicious is that you can be operating entirely off intuition about what that under-the-hood English really is, rather than an open spec you can at least drill into.
The worst part is that I feel this often only becomes apparent once you're already using the product: the use cases documented and sold really do seem like all you'll need.
It seems to me that low code is ideal for prototyping: you have more building blocks. The actual cost is that some of those building blocks may go obsolete and need maintenance, and some will not suit your use case, so you have to customize them.
It flips around the dynamic of 90% development/10% maintenance to 10% development/90% maintenance.
Excel is probably the holy grail of low code. There's a whole lot of power in it, so much that it gets dismissed as a layman's tool. Even some of the stuff people are excited about GPT-3 doing, Excel already does.
Implementing the features you mentioned:
Once you connect your SQL / NoSQL database, you're immediately able to view, query, search, and edit the data. Every record implicitly has a "homepage". On this page, you can add active elements to automate tasks via REST services. We do this using JSONata (jsonata.org) which is a super expressive and concise transformation and query language. You can find some samples in our demo application (https://github.com/dashjoin/dashjoin-demo).
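For flavor, JSONata expressions are compact path-and-aggregation one-liners; this sketch follows the style of the samples on jsonata.org (the document shape is illustrative):

```jsonata
/* Total invoice value: navigate into every order's products
   and sum price times quantity across all of them. */
$sum(Account.Order.Product.(Price * Quantity))
```

A single expression like this both queries and transforms the JSON document, which is why it works well for wiring REST responses to UI elements.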
The core philosophy is to get immediate results and refine from there, depending on your cost / benefit calculation.
SDLC:
I think this is a very important point! Even though you're working with low code, it is still a development project with testing, staging, and production environments, issues, tickets, etc. We allow you to manage the software development lifecycle using Git. You can view a sample project here: https://github.com/dashjoin/dashjoin-demo. It contains the DB connectivity information, custom page layouts, queries, and REST function adapters. A developer commits changes to the repository, and production can be configured to pull a specific QAed version.
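The commit-then-pin flow described above can be sketched with plain Git. File contents and tag names here are illustrative, not Dashjoin specifics:

```shell
# Sketch of a tag-based promotion flow for a Git-backed low-code project.
set -e
dir=$(mktemp -d); cd "$dir"; git init -q
git config user.email dev@example.com; git config user.name Dev

echo 'layout: v1' > page.yaml
git add page.yaml && git commit -qm "QAed page layout"
git tag v1.0.0                      # QA signs off on this version

echo 'layout: v2-wip' > page.yaml
git commit -qam "Work in progress"  # not yet QAed

git checkout -q v1.0.0              # production pulls only the QAed tag
cat page.yaml                       # -> layout: v1
```

Rollback is then just checking out an older tag, and feature branches / pull requests work exactly as they would for hand-written code.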
I have used DronaHQ to integrate with and create apps on top of databases like MongoDB, MySQL, Postgres, and a handful more, and the platform's power really shows in its performance when looking up over 10,000 rows of data (through smart optimization techniques supported by the tool, of course), which I have yet to find elsewhere.
I am definitely saving a lot of time and am able to create some fairly complex tools, such as a custom CRM, dynamic inspection forms, and approval management apps.
Most of the systems either don't have the big benefits they claim to have (time spent just flows elsewhere), or they are of limited use (think basic form workflows into a database or spreadsheet).
It's somewhat similar to a 'website builder': yes, you can build a website, but instead of learning general tools you're learning builder-specific tools, and instead of solving logic in purpose-built logic tools, you're solving logic in (mostly) sub-optimal ones.
This is essentially the same problem as with, say, Excel and Access: to use those at scale you'll still need to learn a 'variant' of programming and data modelling, and it doesn't help much that it lives inside an application instead of inside an IDE.
In this specific case, I would ask the software engineering team to find ways to do rapid prototyping with a trade-off in UI quality; you can generally get away with the same principles as OpenAPI v3 spec-based data structure and interface generators: you focus on specifying how the application should process and model data, and auto-generate all the required persistence and interface stuff from that.
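As a sketch of that spec-first approach: you describe the data model once and let generators produce the CRUD handlers, persistence, and forms. A minimal, hypothetical OpenAPI v3 fragment:

```yaml
# Illustrative schema; field names are made up for the example.
components:
  schemas:
    Customer:
      type: object
      required: [id, email]
      properties:
        id:
          type: integer
        email:
          type: string
          format: email
        status:
          type: string
          enum: [active, suspended]
```

Code generators can turn a schema like this into database tables, REST endpoints, and basic admin forms, which covers much of what the low-code tools sell, while keeping the source of truth in version control.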
Our app would've needed a bunch of new APIs added to make it workable, which we just weren't willing to invest in at the time, so we're sticking with our plain HTML pages and smatterings of JS for now.
In terms of productionizing it, you can pick any Postgres migration system and use that to maintain the schema, and use Directus only for the CRUD/admin-UI layer.
I can recommend giving it a try: start with the hosted free version, even before you decide to self-host.
You can get a responsive interface ready to go pretty fast and it does scale if you know how to design DB schemas, you can also go pretty wild with the custom APIs to integrate more traditional functions.
By default you've always got a staging and prod environment.
Higher-tier plans do enable rollback to previous deployments (up to a year, I believe).
If you're looking to test out an idea or MVP then it's a great tool to get it in users hands, and it does scale.
Avoiding unworkable complexity really comes down to the developer / team you choose and their ability to develop the project with a structured approach.
The problem is that afaik nobody has yet implemented the grand idea well. They build the facade and the insides are rotten, haphazardly put together without a clear vision. It needs a deeply technical vision and, I imagine, a bit of computer science to get right.
It's a bit like what the cloud space was before AWS and Google came along.
The issue I have with many of these products is that a lot of the logic should actually sit closer to the backend (i.e. better REST APIs, etc.). If your engineering team is managing the backend anyway, maybe it shouldn't be a huge issue?
I made an investment here (moving the non-UI logic closer to the database) but it's too early to really see if it will work.
It worked well and saved a ton of time by almost eliminating UI elevations. Most workflow changes were very fast too due to the simplicity of the FileNet GUI. Initial setup takes time, and there aren't a lot of FileNet resources out there. For those reasons I'd hesitate to recommend it.
Wow, I'm impressed by the number of solutions out there. Back at the beginning of Forest Admin, we were alone on the market, which is generally not a good sign. But our perseverance paid off, and it was definitely worth it in the end!
Alright, so why Forest Admin? :)
Because we focus only on the admin-panel use case, not the entire internal-tools world. This way, we are able to provide a fully featured SaaS admin panel out of the box. No need to build it, neither with code nor with low/no-code tools.
Even if you think your use case is "too specific", we have designed our solution around two things that have been part of our DNA from the beginning:
1/ We generate all the backend code required for an admin panel: all CRUD routes, filtering & search, dashboarding, permissions, etc. Everything is automatically generated in a few seconds based on datasource introspection. In the end, the generated code is just a standard REST API, so you can extend/override it without any limitations.
2/ We pre-built the admin UI with every standard admin feature available out of the box, with a big focus on providing the best possible UI/UX for operational people. We also provide all the low/no-code features to customize pretty much anything, plus a feature called "Workspace" (which is generally the core of what our competitors do) that lets users build custom views from scratch by dragging and dropping UI components.
About SDLC - it's provided out of the box at Forest Admin:
- For backend code - you can rely on Git.
- For UI code - we provide a feature called "Development workflow" (https://docs.forestadmin.com/documentation/getting-started/d...) that allows you to have different environments (e.g. dev, staging, prod) and to fork branches, merge them, etc.
This way, even companies with hundreds of onboarded users (including dozens of devs) can use us in a scalable way.
https://www.onedb.online/blog/why_no_code_is_better_than_ful...
Obviously I'm biased, since I'm the founder of Onedb, a no-code/ERP/CRM platform. I've tried to be objective in the article. I am sure, though, that a domain expert using Onedb can deliver what you need 100x faster than with traditional software tools.
- to be a database admin interface for non-engineering users to display and edit a bunch of tables in postgres.
- having an interface around calling internal APIs, with a page for input parameters
It's pretty good for these use cases; it allows non-engineering people to do things that would otherwise require a custom-built UI.
For reference:
What do you want to use your low-code system to build?
If you're targeting arbitrary app creation, then you probably want your aforementioned tools or the myriad of other options (e.g. Appian, etc).
But if you're targeting automation of existing processes / apps, on Windows, and would like some-but-not-unlimited UI flexibility... use an RPA tool. E.g. UiPath
You get out of the box GUI integration for driving other apps + all the other app designer bits.
Which turns any automation problem into (1) "Is there a prebuilt component for it?", (2) "If not, can I interact directly with the necessary GUI controls?", (3) "If not, can I use CV to drive the problem GUI sections?"
In ~10 years of doing this, over a few products, I'd say (1) handles 60% of use cases, (2) handles 35%, (3) handles 4.99%, and I can only think of a couple times I've fallen completely through and been unable to automate.
And which, critically, means you can push the tool to non-traditional programmers and get decent results.
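The three-tier fallback above can be sketched in ordinary code; the function names here are hypothetical illustrations, not UiPath's actual API:

```python
# Sketch of the RPA fallback chain: prebuilt component -> direct GUI
# control interaction -> computer vision. All names are illustrative.

def try_prebuilt_component(step):
    # ~60% of cases: a vendor activity already exists (e.g. "Read Excel").
    return f"prebuilt:{step}" if step in {"read_excel", "send_email"} else None

def try_gui_controls(step):
    # ~35%: drive the target app's widgets directly via its UI tree.
    return f"gui:{step}" if step in {"legacy_form_submit"} else None

def try_computer_vision(step):
    # ~5%: last resort, locate elements by screenshot matching.
    return f"cv:{step}" if step in {"citrix_button"} else None

def automate_step(step):
    # Walk the strategies in order of reliability and cost.
    for strategy in (try_prebuilt_component, try_gui_controls,
                     try_computer_vision):
        result = strategy(step)
        if result is not None:
            return result
    raise RuntimeError(f"Could not automate step: {step}")

print(automate_step("legacy_form_submit"))  # -> gui:legacy_form_submit
```

The point of the ordering is that each tier is slower and more brittle than the last, so you only fall through when the cheaper option genuinely can't handle the step.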
---
> Can you do an effective SDLC with them? E.g. code on a staging environment, push to version control, promote to prod, rollback to old version
Go ahead and ignore any tool that doesn't store apps/automations in readable files (typically XML).
From that subset, they have varying VCS support, but most modern ones support a sane SDLC.
I'd also weed out anything that doesn't support command line / Jenkins builds.
> Have you actually saved time with them?
Hell yeah. I've automated health care EMR and claims systems in a few weeks, when I'd still be trying to get legal access to an SDK if I'd gone that route.
The key feature of RPA is WYSIWYG automation: since you're automating the GUI itself, there's no impedance mismatch between {process as performed by user} and {process as performed by code}. You just ask people what they do, then map their actions 1:1 to automation.
> Did you avoid introducing unworkable complexity?
The key here is process documentation: specifically, writing down the why of a process and its steps. The what is more typically noted, but is far less informative.
With the why documented, you can port to an alternate system or adjust the process down the road without issue.
Setting up hard, machine-checked "must X to Y" gates helps a lot with ensuring consistency here, e.g. the carrot and stick of "you must create a doc in this format, have it reviewed, and have your code in version control and reviewed" before "you get the ability to schedule it in production".
If you want lessons-learned from my adventures (and misadventures) of creating a low-code offering at a large enterprise company, feel free to reach out. Email in profile.
FWIW, I've done some explorations with low-code.
Written format:
Using Hasura with Low-code: https://hasura.io/blog/hasura-for-the-low-code-ecosystem/
Retool: https://retool.com/blog/event-driven-architecture-and-reacti...
Microsoft Power-automate: https://hasura.io/blog/integrating-hasura-with-microsoft-pow...
Retool (again): https://hasura.io/blog/integrating-hasura-with-retool-for-bu...
Bubble: https://hasura.io/blog/tutorial-using-hasura-api-with-bubble...
Draftbit (Again): https://hasura.io/blog/a-tutorial-using-hasura-with-draftbit...
Rapid prototyping listicle: https://hasura.io/blog/rapid-prototyping-with-hasura/
Videos:
Draftbit: https://www.youtube.com/watch?v=WrhQKt5-QY8&list=PLTRTpHrUcS...
Appsmith: https://www.youtube.com/watch?v=e4swzfSAevo&list=PLTRTpHrUcS...
Retool: https://www.youtube.com/watch?v=qhFs431UwIw&list=PLTRTpHrUcS...
I do think there's a lot of power in low-code, and for internal tooling specifically, for testing theories, or even for short-lived public-facing tools, it fits a solid use case: turning smart people into actionable devs. They still need to "think like a programmer", which essentially makes the UI a language, albeit a more verbose one.
But with a generation of kids learning Scratch and other tools like that soon to hit the job market, maybe these tools are primed for major adoption?
Today I still ask businesses that ask me to help with tools why this CAN'T be a low-code solution, similar to the old adage, "why shouldn't this be a spreadsheet."
Full disclosure, I work for Hasura so there's some heavy handed surfacing of Hasura in those links, but it just goes to show what's possible when you decouple front-end from the API layer, letting different tools solve their scoped problem space.