General brainstorming question about our data process
We're a small team (5) that's been using the general process outlined below:
data sources -> dataflows -> semantic models -> PBI reports/dashboards
We have read access to data sources (SQL DBs/SharePoint/Excel files, etc.), but we need to apply transformations on these native 'tables', so we use dataflows. We then pull those clean/transformed 'tables' into our semantic model and build our star schema/measures/RLS, etc. We publish that model, treat it as our 'golden dataset/model', and build all our reports off of it.
The question...
We don't have an actual database we can read from and write to, hence the process above. But now that Fabric has turned on Lakehouse as an option, would you change the process above in any way to include lakehouses, or just continue with the same process?
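To make the comparison concrete, here's roughly what I imagine the dataflow step would look like as a Fabric notebook writing Delta tables into a Lakehouse. This is just a sketch; the table and column names (raw_sales, fact_sales, order_id, order_date, amount) are made up for illustration.

```python
# Sketch: dataflow-style cleanup done in a Fabric notebook instead,
# landing the result as a Delta table in a Lakehouse.
# All table/column names here are illustrative, not our real schema.
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook the spark session is already provided;
# getOrCreate() just makes this snippet self-contained.
spark = SparkSession.builder.getOrCreate()

# Read the raw source table landed in the Lakehouse (e.g. via shortcut or pipeline)
raw = spark.read.table("raw_sales")

# Basic cleanup, similar to what we do in dataflows today
fact = (
    raw
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("amount").isNotNull())
)

# Write the cleaned table as Delta; the semantic model (Import or Direct Lake)
# would then point at fact_sales instead of the dataflow output
fact.write.format("delta").mode("overwrite").saveAsTable("fact_sales")
```

The upstream read access, star schema, measures, and RLS would stay the same; only the transformation/storage layer would move.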
Context around the transformations and storage: we do basic transformations that on average refresh in under 10 minutes, data size is under 1 million records for our one fact table, and we use incremental refresh (IR) on that fact table to reduce refresh times.
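If it helps, the Lakehouse equivalent of that incremental refresh would presumably be a watermark-style load in the notebook, something like the sketch below (again, raw_sales/fact_sales/order_date are made-up names, and this assumes a date column we can filter on):

```python
# Sketch: incremental load into a Lakehouse Delta table, roughly mirroring
# what incremental refresh does for us on the semantic model today.
# Table/column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Fabric notebook

# Find the latest date already present in the fact table (the "watermark")
last_loaded = (
    spark.read.table("fact_sales")
    .agg(F.max("order_date").alias("max_date"))
    .collect()[0]["max_date"]
)

# Pull only rows newer than the watermark from the source and append them
new_rows = (
    spark.read.table("raw_sales")
    .filter(F.col("order_date") > F.lit(last_loaded))
)

new_rows.write.format("delta").mode("append").saveAsTable("fact_sales")
```

Given the data volume (under 1 million rows), I'm not sure this buys much over what we already have, which is really the heart of the question.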