What tools do you use or know about, e.g. something like Talend Data Preparation (https://www.talend.com/products/data-preparation)?
I found OpenRefine (http://openrefine.org), which is browser-based like Talend Data Preparation.
It would be nice to have a "true" desktop application :)
Anyway, I think Alteryx does this job really well. It's a great ETL tool that the average Joe can pick up and learn very quickly. Sure, the UX could be improved, but it's miles better than AWS Data Pipeline and other tools. It is a bit costly, as another user mentioned, but well worth it IMO. Tableau has introduced some new products over the past year or two to compete in this space as well, so if you use Tableau for reporting, look into their new offerings. Dell also offers an ETL tool called Boomi; it showed some promise a few years ago and might be worth considering, though I have no idea what it costs. Another option is SSIS if the data is eventually going to live in SQL Server or another Microsoft database.
Ultimately, I would consider options based on your target database/environment.
It lets you transform and join data from different sources (including databases like MySQL, Postgres, and Redis, as well as CSV or JSON files, with more to come) using a familiar interface: plain SQL.
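For a sense of what that saves, here is roughly the manual equivalent of one such cross-source join, done by hand with pandas and SQLAlchemy rather than with the tool (table and column names are made up; this is not the tool's own syntax):

    # Sketch: manually joining a Postgres table with a CSV file
    # (hypothetical "orders" table and "customers.csv").
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://localhost/shop")  # hypothetical database
    orders = pd.read_sql("SELECT id, customer_id, total FROM orders", engine)
    customers = pd.read_csv("customers.csv")                # hypothetical CSV export

    # Join the two sources on the customer id, like a SQL JOIN would.
    joined = orders.merge(customers, left_on="customer_id", right_on="id",
                          suffixes=("_order", "_customer"))
    print(joined.head())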
Other than that, I like Exploratory[2].
And for nested JSON data I use another project of mine, jql[3], an easier-to-use (in my opinion) Lispy alternative to jq (originally inspired by a Hacker News comment, actually).
The data preparation Wikipedia page mentions Talend by name along with Paxata, Trifacta, Alteryx, and Ataccama.
I do a lot of exploratory coding in IPython, though the threshold for "switch to a real editor and run git init and poetry new" is pretty low.
Want to munge CSV? stdlib, or pandas (https://pandas.pydata.org/pandas-docs/stable/)
Want to munge JSON? stdlib.
Want to munge a database? pandas, stdlib, anything that speaks sqlalchemy
Want to validate your JSON/SQL/CSV/whatever and have it come in as a bunch of structured classes, not std types? jsonschema (https://pypi.org/project/jsonschema/), attrs (http://www.attrs.org/en/stable/) + marshmallow (https://marshmallow.readthedocs.io/en/stable/) or desert (https://desert.readthedocs.io/en/latest/), or attrs + cattrs (https://github.com/Tinche/cattrs); see the small jsonschema/glom sketch after this list.
Want to transform csv into sqlite? https://github.com/simonw/csvs-to-sqlite
Want to serve sqlite as a REST api? https://datasette.readthedocs.io/en/stable/
Want to stuff simple things into a database really fast? https://dataset.readthedocs.io/en/latest/
Want to flip numerical data around? NumPy (https://docs.scipy.org/doc/numpy/reference/)
Want to model it? SciPy (https://www.scipy.org/docs.html), pandas
Want to plot it? seaborn https://seaborn.pydata.org/, plotnine https://plotnine.readthedocs.io/en/stable/
Want to futz around with structured data in a sensible way? glom https://glom.readthedocs.io/en/latest/, python-lenses https://github.com/ingolemo/python-lenses.
Want to spit out tabular data? Tabulate https://github.com/astanin/python-tabulate
Want to figure out where not to eat in Chicago? Built-ins: Counter, defaultdict, comprehensions (https://www.youtube.com/watch?v=lyDLAutA88s)
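Since a couple of those lines are terse, here's a small made-up example of the validate-and-reshape step: jsonschema checks the shape of a nested payload, glom pulls fields out of it (the payload and field names are invented for illustration):

    # Hypothetical "order" payload: jsonschema validates the structure,
    # glom flattens the nested parts into a plain dict.
    import json
    from glom import glom
    from jsonschema import validate

    SCHEMA = {
        "type": "object",
        "required": ["customer", "items"],
        "properties": {
            "customer": {"type": "object", "required": ["name"]},
            "items": {"type": "array"},
        },
    }

    payload = json.loads("""
    {"customer": {"name": "Ada", "city": "London"},
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]}
    """)

    validate(instance=payload, schema=SCHEMA)  # raises ValidationError on bad input

    flat = glom(payload, {
        "name": "customer.name",      # dotted path into nested dicts
        "skus": ("items", ["sku"]),   # walk the list, pick one field per element
    })
    print(flat)  # {'name': 'Ada', 'skus': ['A1', 'B7']}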
There's a LOT you can do pretty fast, and I can more or less hammer out a basic ETL CLI script in my sleep at this point.
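As a minimal sketch of that kind of script, assuming a hypothetical sales.csv with "region" and "amount" columns (file name, table name, and columns are all made up):

    # Tiny ETL CLI: pandas munges the CSV, dataset stuffs it into SQLite,
    # tabulate spits out a summary table.
    import sys
    import dataset
    import pandas as pd
    from tabulate import tabulate

    def main(csv_path):
        df = pd.read_csv(csv_path)
        df = df.dropna(subset=["region", "amount"])          # drop incomplete rows
        df["region"] = df["region"].str.strip().str.lower()  # normalise a text column

        db = dataset.connect("sqlite:///etl.db")             # local SQLite file
        db["sales"].insert_many(df.to_dict(orient="records"))

        summary = df.groupby("region")["amount"].sum().reset_index()
        print(tabulate(summary.values.tolist(), headers=["region", "total"]))

    if __name__ == "__main__":
        main(sys.argv[1])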
At the moment I'm working on improving performance. I can already load 1 million rows into it without too much trouble, and the next step is to load some Kaggle-size CSVs (e.g. 5.6 GB) and then be able to run data cleaning or other transforms on them without it choking.
If anyone's interested in this kind of stuff feel free to drop me an email (in my profile)!
Shell scripts using mostly sed and awk go a very, very long way when I want something repeatable or I'm dealing with a lot of data. And when that starts getting heavy or long and I need more structure, Python.
GUIs just get in the way in this space. By necessity, they need to be opinionated because there's only so much screen to get around in. They also tend to create artifacts that aren't quick to understand later and are locked into some particular product or way of doing things.
It is a GUI-based solution with an emphasis on ease of use, aimed at people who aren't coders or data science professionals: for example, a marketer who has several different Excel and CSV files they need to merge, clean, filter, and restructure to create a report. Any feedback would be very welcome.
Downloading everything locally isn't desirable for me.
What use cases do you have that go beyond Excel?
I say this as someone who develops pipelines in notebooks and workflow frameworks.
Snowflake.
Tools: Avora, Fivetran, Informatica, Excel, Alteryx, Datameer, Tableau Prep, etc.; the list could go on and on.
It’s exciting to see new tooling coming out. Depending on your needs, watching a video or two about how Monarch runs could be helpful in assessing which tool is right for you.
Munging and merging pretty large datasets using data.table is very efficient and intuitive after a bit of practice.
The vignette in R is very good. DataCamp has a course (I think it's a little dated). A search for videos by Matt Dowle and Arun Srinivasan can also give a good, quick idea of data.table's capabilities. I think both are involved in some capacity with h2o.ai, which seems to be pretty good and is on my to-do list.
Desktop coding in RStudio with the R version is a wonderful experience.
Technical, with a fee: dbt, Prefect (while Prefect itself is free, Prefect Cloud is not)
Non-technical, with a fee: datacoral.ai, ascend.io
You can solve your problem with any of these solutions, and many others not mentioned here.