Building a Data Warehouse Dimensional Model using Azure Synapse Analytics SQL Serverless
Science & Technology
The Serverless SQL Pools service within Azure Synapse Analytics allows querying CSV, JSON, and Parquet data in Azure Storage, Azure Data Lake Storage Gen1/Gen2, and Cosmos DB. With this functionality we can create a Logical Data Warehouse over data stored in these systems without moving and loading the data. However, the source data may not be in the best possible format for analytical workloads...
In this session we'll be looking at using Azure Synapse Analytics SQL Serverless Pools to create a Data Warehouse using the Dimensional Modelling technique to create a set of Dimensions and Facts and store this data in a more appropriate structure and file format.
All data will be stored in an Azure Data Lake Gen2 account with processing and serving performed by the SQL Serverless Pools engine.
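As a rough illustration of the approach the session describes — transforming raw CSV into Parquet dimensions using only the Serverless SQL Pools engine — the following sketch uses CREATE EXTERNAL TABLE AS SELECT (CETAS) over OPENROWSET. The storage paths, data source, file format, and column names here are hypothetical placeholders, not the actual demo objects:

```sql
-- Sketch only: DataLakeSource, ParquetFormat, paths and columns are assumed names.
-- CETAS reads the raw CSV via OPENROWSET and writes the result set as Parquet
-- files into the data lake, registering an external table over the output.
CREATE EXTERNAL TABLE dbo.DimCustomer
WITH (
    LOCATION    = 'conformed/dimcustomer/',   -- output folder in the data lake
    DATA_SOURCE = DataLakeSource,             -- pre-created external data source
    FILE_FORMAT = ParquetFormat               -- pre-created Parquet file format
)
AS
SELECT
    ROW_NUMBER() OVER (ORDER BY CustomerID) AS CustomerKey,  -- surrogate key
    CustomerID,
    CustomerName
FROM OPENROWSET(
    BULK 'raw/customers/*.csv',
    DATA_SOURCE    = 'DataLakeSource',
    FORMAT         = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW     = TRUE
) AS src;
```

Note that dropping the external table afterwards removes only the metadata; the Parquet files written by CETAS remain in the lake and can still be queried via OPENROWSET or a view.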
Comments: 25
13min in, already obvious this is a fantastic video. Thanks for doing this!
@CloudLunchLearn
A year ago
Thank you @BaGua79. We are happy you enjoyed it. Please help us promote it, by sharing it with your colleagues and friends using your social accounts. Have a great day.
Very informative session, sir.
@CloudLunchLearn
A year ago
Thank you @Arsalan. We are happy you enjoyed it. Please help us promote it, by sharing it with your colleagues and friends using your social accounts. Have a great day.
I enjoyed watching this video, but please give us 1080p or 4K going forward. Thanks!
Great video, thanks for sharing.
@CloudLunchLearn
A year ago
Thank you @Ken. We are happy you enjoyed it. Please help us promote it, by sharing it with your colleagues and friends using your social accounts. Have a great day.
How do you take care of SCD 1, SCD 2, etc.? Coming from a traditional SQL Server background, this all sounds messed up to me. They talk about external tables, then views, then CSV files 😯 ?? What is going on?
Very helpful and clear. Thanks for sharing this.
Thank you for this amazing video!!
@CloudLunchLearn
A month ago
Thank you for your support Ahmed. We appreciate it. If you have a few minutes, please share the session on your social so your friends and contacts can watch it as well. And you can tag the Cloud Lunch and Learn group. Have a great day!
@AHMEDALDAFAAE1
A month ago
@@CloudLunchLearn I did already. Thank you 😊
Great session - really like your "show it for real" demo style. So the use of external tables in this session is primarily to easily transform CSV to Parquet format? Also, for adding new partitions to the view, it relies on the Parquet file remaining after a Drop External table has been issued?
@CloudLunchLearn
A year ago
Thank you @George. We are happy you enjoyed it. Please help us promote it, by sharing it with your colleagues and friends using your social accounts. Have a great day.
Nice video, a shame about the resolution.
@CloudLunchLearn
A year ago
Thank you @Joshua. We are happy you enjoyed it. Please help us promote it, by sharing it with your colleagues and friends using your social accounts. Have a great day.
Great, thanks for sharing! I was trying to implement a Data Vault logical DWH but there is no HASHBYTES :| I hope this feature will be supported in the future.
@DatahaiBI
2 years ago
Hi, the HASHBYTES function is now supported in Serverless SQL Pools
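For context on the reply above, a minimal sketch of how HASHBYTES might be used in Serverless SQL Pools to build a Data Vault-style hash key. The view, column names, and delimiter convention are assumptions for illustration, not objects from the session:

```sql
-- Sketch: derive a hub hash key from business key columns (names are hypothetical).
-- CONVERT(..., 2) renders the binary hash as a hex string without the 0x prefix.
SELECT
    CONVERT(CHAR(32),
            HASHBYTES('MD5',
                      CONCAT(UPPER(TRIM(CustomerID)), '|',
                             UPPER(TRIM(SourceSystem)))),
            2) AS HubCustomerHashKey,
    CustomerID,
    SourceSystem
FROM dbo.vwStageCustomer;
```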
@CloudLunchLearn
A year ago
Thank you @Rodrigo. We are happy you enjoyed it. Please help us promote it, by sharing it with your colleagues and friends using your social accounts. Have a great day.
Super nice!! Is there any way to get the scripts and source files? Thank you!!
Can you please give us the CSV files?
Would looping through a series of dates be synchronous? Is there a way to do that asynchronously?
@DatahaiBI
2 years ago
This could be done using Pipelines/Data Factory to iterate over the dates and trigger the stored procedure asynchronously. The external table name would need to be unique per run (dynamic SQL to generate the table creation syntax).
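A rough sketch of the dynamic-SQL idea mentioned in this reply — generating a uniquely named external table per run so parallel pipeline executions don't collide. The table, view, data source, and parameter names are hypothetical:

```sql
-- Sketch: @RunDate would be passed in as a stored procedure parameter by the
-- pipeline; suffixing the table name and output folder keeps each run isolated.
DECLARE @RunDate CHAR(8) = '20230101';
DECLARE @Sql NVARCHAR(MAX) = N'
CREATE EXTERNAL TABLE dbo.FactSales_' + @RunDate + N'
WITH (
    LOCATION    = ''facts/sales/' + @RunDate + N'/'',
    DATA_SOURCE = DataLakeSource,
    FILE_FORMAT = ParquetFormat
)
AS SELECT * FROM dbo.vwStageSales WHERE SaleDateKey = ' + @RunDate + N';';

EXEC sp_executesql @Sql;
```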
I'm surprised I can't see anything.
Your intro is too long