Shortcuts and data ingestion in Microsoft Fabric | DP-600 EXAM PREP (5 of 12)

Free DP-600 study notes inside community: www.skool.com/microsoft-fabri...
In this video (5 of 12 in the series), we begin Section 2 of the DP-600 Study Guide: Prepare and Serve Data. We start by exploring the following topics:
- Ingest data by using a data pipeline, dataflow, or notebook
- Copy data by using a data pipeline, dataflow, or notebook
- Choose an appropriate method for copying data from a Fabric data source to a lakehouse or warehouse
- Create and manage shortcuts
This video is part of the DP-600 Exam Preparation series: • DP-600 Exam Preparation
Timeline
0:00 Intro
1:42 Ingestion methods overview
3:13 Dataflow for data ingestion
5:52 Data pipeline for data ingestion
8:02 Fabric notebook for data ingestion (Spark)
10:24 Shortcuts overview
12:04 Shortcuts permissions
13:05 When to use which method?
15:09 Practice Questions
19:59 Outro and next steps
#microsoftfabric #dp600 #powerbi

Comments: 30

  • @LearnMicrosoftFabric · 23 days ago

    Hey everyone, thanks for watching! How is your DP-600 studying going? 🤓 Please leave a LIKE and a COMMENT if you are finding this series useful in your preparation!

  • @lieuwewiskerke574 · 9 days ago

    Hi Will, nice presentation. I think for practice question 4 the answer should be B, so that you can create a shortcut to access the table in lakehouse B. With Viewer permission you only have access to the SQL endpoint. ViewAll access to the lakehouse would be sufficient, but that was not one of the options. Curious if I missed anything there.

  • @carlosnavia1361 · 5 days ago

    ✅ High quality content. Highly recommended.

  • @LearnMicrosoftFabric · 4 days ago

    Thanks! And thanks for watching!

  • @Nekitamo190 · 7 days ago

    You actually can load data from within a data pipeline to a data store located in a different workspace; the straightforward option just isn't implemented in the UI for some reason, but if you get the destination Workspace and Item ID parameters and put them in the appropriate fields, it gets the job done.

  • @LearnMicrosoftFabric · 7 days ago

    That is correct, yes. They released an article yesterday showing this method, which is helpful! They are working on adding it to the UI 👍 Here's the link for those who want to read more: blog.fabric.microsoft.com/en-US/blog/copy-data-from-lakehouse-in-another-workspace-using-data-pipeline/
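
    For anyone working in a notebook rather than a pipeline, the same cross-workspace idea can be sketched in PySpark by pointing at the source lakehouse's OneLake path using the workspace and item IDs. A minimal sketch; the GUIDs and table names below are placeholders, not values from the video:

    ```python
    # Minimal PySpark sketch: read a Delta table from a lakehouse in ANOTHER workspace
    # by addressing it with workspace and item (lakehouse) IDs in the OneLake path.
    # Both GUIDs below are placeholders.
    src_workspace_id = "11111111-1111-1111-1111-111111111111"
    src_lakehouse_id = "22222222-2222-2222-2222-222222222222"

    src_path = (
        f"abfss://{src_workspace_id}@onelake.dfs.fabric.microsoft.com/"
        f"{src_lakehouse_id}/Tables/sales"
    )

    df = spark.read.format("delta").load(src_path)

    # Land the data in the lakehouse attached to this notebook.
    df.write.format("delta").mode("overwrite").saveAsTable("sales_copy")
    ```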

  • @Nalaka-Wanniarachchi · 23 days ago

    Quality stuff. Good work, Will.

  • @LearnMicrosoftFabric · 23 days ago

    Thanks! Thanks for watching, hope you found it useful 👍

  • @nazih7756 · 23 days ago

    Good work, Will. Thanks!

  • @LearnMicrosoftFabric · 23 days ago

    Thanks for watching!!

  • @happyheart9431 · 9 days ago

    Hello Will, thanks for your video. Does Power Query have any data model size limitation when importing data?

  • @cuilanzou8638 · 23 days ago

    I have booked a DP-600 exam seat for 10th May. This video was posted at the perfect time for my DP-600 preparation.

  • @LearnMicrosoftFabric · 23 days ago

    Oh nice, best of luck for the exam, I should have a few more videos released before then too :)

  • @EllovdGriek · 22 days ago

    Thank you for the content. I am looking for a way to efficiently copy data from an on-prem database to a 'bronze' layer. Is there a workaround for the fact that parameterization of dataflows is not possible (yet)?

  • @LearnMicrosoftFabric · 22 days ago

    Hey there, thanks for the comment! I don't think the lack of (external) parameterization in dataflows is a blocker for what you describe; you just have to set it up manually, which is a bit more effort to set up (and also to maintain, if your on-prem db changes structure regularly).

  • @osmanbaba1485 · 23 days ago

    Hi Will, what's your opinion on exam dumps? Do you think they're viable or outdated?

  • @LearnMicrosoftFabric · 23 days ago

    Sorry what do you mean by exam dumps?

  • @VinayakKommana · 23 days ago

    In practice question 3, for option D, does the Warehouse support directly reading data from ADLS Gen2? I thought COPY INTO could only be used if the file is present in a lakehouse or somewhere within Fabric.

  • @LearnMicrosoftFabric · 23 days ago

    Yes, like this: learn.microsoft.com/en-us/fabric/data-warehouse/tutorial-load-data

  • @VinayakKommana · 23 days ago

    @LearnMicrosoftFabric Ohh got it, it's similar to how it was in Synapse! Thanks Will.
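
    For reference, a hedged sketch of what a COPY INTO load from ADLS Gen2 can look like when driven from Python with pyodbc; the server name, table, storage URL, and SAS token are all placeholders (you could equally run the same COPY INTO statement directly in the warehouse's SQL editor, as the linked tutorial does):

    ```python
    # Hypothetical sketch: run a COPY INTO statement against a Fabric warehouse SQL
    # endpoint to load Parquet files straight from ADLS Gen2. All names/secrets are
    # placeholders and the connection details are an assumption, not from the video.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=yourendpoint.datawarehouse.fabric.microsoft.com;"  # warehouse SQL endpoint
        "Database=MyWarehouse;"
        "Authentication=ActiveDirectoryInteractive;"
        "Encrypt=yes;",
        autocommit=True,
    )

    copy_sql = """
    COPY INTO dbo.Sales
    FROM 'https://mystorageaccount.dfs.core.windows.net/raw/sales/*.parquet'
    WITH (
        FILE_TYPE = 'PARQUET',
        CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>')
    )
    """

    conn.cursor().execute(copy_sql)
    ```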

  • @moeeljawad5361 · 19 days ago

    Hello Will, imagine that I have historic JSON files (thousands of them, adding up to a couple of hundred GB). I need to append and save them to a lakehouse for later consumption in Power BI. I believe that a notebook is the way to go, as pipelines can't get data from local files and a dataflow will struggle with that amount of data, am I right? Another question is about the ability of Power BI to connect to that amount of data in a lakehouse: will the report work, and will it be fast, given that the connection would be Direct Lake?

  • @LearnMicrosoftFabric · 19 days ago

    Yeah, sounds like a job for a notebook 👍 And yes, it should be pretty quick with Direct Lake; 200 GB of JSON will compress a lot by the time it's in a lakehouse Delta table. Give it a try and find out 👍
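
    A minimal PySpark sketch of that bulk load, assuming the JSON files have already been uploaded to the lakehouse Files area; the folder path and table name are made up for illustration:

    ```python
    # Hypothetical sketch: read a folder of historic JSON files from the lakehouse
    # Files area and write them out once as a Delta table for Direct Lake to read.
    df = (
        spark.read
        .option("multiLine", "true")          # adjust to how the JSON files are structured
        .json("Files/raw/history/*.json")
    )

    df.write.format("delta").mode("overwrite").saveAsTable("history")
    ```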

  • @mkj256 · 19 days ago

    @LearnMicrosoftFabric Maybe more of a Spark question: consider that the user has an incoming file every week. Logically, they will schedule the notebook to run every week to append the new file to the Delta table. My question is: will appending to the Delta table require a read of the Delta table in the notebook, or can they append without reading it first? I am concerned about the append time every week; will it be too long? Thanks.
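
    For the weekly run, an append is normally enough. A hedged sketch, with the path and table name made up: mode("append") adds new data files plus a new Delta log entry rather than rewriting the existing table, so the job time scales with the size of the new file rather than the full table.

    ```python
    # Hypothetical weekly job: append only the newly arrived file to the existing Delta table.
    new_df = spark.read.option("multiLine", "true").json("Files/raw/incoming/latest.json")

    # Append writes new Parquet files and a transaction-log entry; it does not rewrite
    # (or fully re-read) the data already in the 'history' table.
    new_df.write.format("delta").mode("append").saveAsTable("history")
    ```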

  • @jafarhussain4665 · 14 days ago

    Hi Will, thanks for the wonderful video. Can you please upload a video on fetching data from a given API and storing it in a Fabric warehouse? I am trying to use this as a substitute for Informatica, where the newly generated data from the API should merge into the Fabric warehouse on a daily schedule. Please explain this with a live API so that I can create a proper flow. Thank you.

  • @LearnMicrosoftFabric · 13 days ago

    Hi, if you go through some of my older videos on the channel, I talk through a REST API example 👍

  • @jafarhussain4665 · 10 days ago

    @LearnMicrosoftFabric Thank you Will for the update.
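
    For readers looking for the pattern discussed above, a minimal hypothetical sketch: pull JSON from a REST API in a Fabric notebook and land it in a lakehouse Delta staging table, from which it can then be copied or merged into the warehouse. The URL, response shape, and table name are all placeholders.

    ```python
    # Hypothetical sketch: REST API -> Spark DataFrame -> lakehouse Delta staging table.
    import requests

    resp = requests.get("https://api.example.com/v1/orders", timeout=30)
    resp.raise_for_status()
    records = resp.json()                    # assumes the API returns a JSON array of objects

    df = spark.createDataFrame(records)      # works for a list of flat dictionaries
    df.write.format("delta").mode("append").saveAsTable("staging_orders")
    ```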

  • @Han-ve8uh · 4 days ago

    16:55 Where is it mentioned that transformations must be done in a dataflow, and that it can't leave the source data alone? Can't we use a dataflow with no transformations, or hack in some useless int->float->int transforms if it must have some steps?

  • @LearnMicrosoftFabric · 4 days ago

    You can have a dataflow with no transformations, but I don't think it's possible to export a JSON file from a dataflow.

  • @Karenshow · 23 days ago

    Could you do a video on database mirroring with Snowflake? Thanks.

  • @LearnMicrosoftFabric · 22 days ago

    Hi Karen! I hope to cover database mirroring in more detail in the future, but full transparency, it won't be for at least another month! I know other YouTubers have videos on it though; might be worth a search!
