There is definitely research being done on sparse data sets. Early stats methods were applied to what we would now consider small data. Tukey did a lot of important work on data visualization and exploratory data analysis that applies directly to small data sets. Many medical experiments run on small samples, and Bayesian methods apply there too.
10 million rows of data is still pretty big, all the same. You can get away with invoking the Central Limit Theorem after about 30 observations, for instance (with all the usual assumptions and caveats). Sometimes all you're getting for the extra effort is a tighter confidence interval around something that could be pretty well estimated with a couple of hundred rows of data.
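To make that concrete, here's a back-of-the-envelope sketch (assuming a simple mean estimate and a hypothetical standard deviation of 1.0, which is not from anything above) showing how the 95% confidence interval half-width shrinks like 1/sqrt(n):

```python
import math

# Rough illustration: the 95% CI for a mean shrinks like 1/sqrt(n),
# so most of the gain from a huge sample is a marginally tighter interval.
sigma = 1.0   # assumed standard deviation of the metric (hypothetical)
z = 1.96      # ~95% normal quantile

for n in (30, 200, 10_000, 10_000_000):
    half_width = z * sigma / math.sqrt(n)
    print(f"n = {n:>10,}: 95% CI half-width ~ {half_width:.5f}")
```

Going from a couple hundred rows to 10 million moves the half-width from roughly 0.14 to 0.0006 in this toy setup, which often doesn't change the decision you'd make.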
Thinking this through, the main pain point I've experienced is convincing people to act on the data, follow through, and connect changes in the product (however defined) to changes in the metrics.
Maybe a simple model with a well-chosen prior informed by domain knowledge does the job.
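For instance (a minimal sketch, not anything from the original discussion): a conjugate Beta-Binomial model where the prior carries the domain knowledge and a few dozen observations update it. The prior pseudo-counts and the data here are made up for illustration.

```python
from scipy import stats

# Hypothetical example: estimating a conversion rate from only 40 trials,
# with a Beta prior encoding the belief that the rate is usually around 5%
# (prior "pseudo-counts" of 5 successes in 100 trials).
prior_a, prior_b = 5, 95          # assumed prior, informed by domain knowledge
successes, trials = 3, 40         # the small data set

# Conjugate update: posterior is Beta(prior_a + successes, prior_b + failures)
post_a = prior_a + successes
post_b = prior_b + (trials - successes)
posterior = stats.beta(post_a, post_b)

print(f"Posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

With that little data, the prior does most of the work, which is exactly the point: you get a usable estimate without pretending you have more observations than you do.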