Delta Live Tables A to Z: Best Practices for Modern Data Pipelines

Science & Technology

Join Databricks' Distinguished Principal Engineer Michael Armbrust for a technical deep dive into how Delta Live Tables (DLT) reduces the complexity of data transformation and ETL. Learn what’s new, what’s coming, and how to master the ins and outs of DLT.
Michael will describe and demonstrate:
- What’s new in Delta Live Tables (DLT) - Enzyme, Enhanced Autoscaling, and more
- How to easily create and maintain your DLT pipelines
- How to monitor pipeline operations
- How to optimize data for analytics and ML
- A sneak peek at the DLT roadmap
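For orientation on the pipeline-creation topics above, a minimal DLT pipeline definition in Python looks roughly like the sketch below. This is a hedged illustration, not material from the talk: the source path and table names are invented, and the `dlt` module and `spark` session exist only inside a Databricks DLT pipeline, so it cannot run as a standalone script.

```python
import dlt
from pyspark.sql.functions import col

# Bronze: incrementally ingest raw JSON files with Auto Loader (cloudFiles).
@dlt.table(comment="Raw events ingested from cloud storage")
def events_raw():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/events"))  # illustrative path

# Silver: cleaned table guarded by a data-quality expectation.
@dlt.table(comment="Events with a valid id and timestamp")
@dlt.expect_or_drop("valid_id", "id IS NOT NULL")
def events_clean():
    return dlt.read_stream("events_raw").where(col("ts").isNotNull())
```

DLT infers the dependency graph from the `dlt.read_stream("events_raw")` reference, so `events_clean` is automatically scheduled after `events_raw`.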
Talk by: Michael Armbrust
Connect with us: Website: databricks.com
Twitter: /databricks
LinkedIn: /databricks
Instagram: /databricksinc
Facebook: /databricksinc

Comments: 27

  • @stevequan7306
    8 months ago

    This is the Bible for DLT! Worth looping and studying! Well done🙌

  • @jonathanduran2921
    8 months ago

    Ha, the CEO knowing where the raw data is stored... almost died laughing there.

  • @hapslab
    5 months ago

    #databricks is an ecosystem now, helped by all its amazing creators. Proud to have been associated with it since 2015❤

  • @mrliuquantong4943
    9 months ago

    Excellent demo! Could you please provide the PDF of this demo, as well as the code, for us to practise? Looking forward to hearing from you.

  • @henryeleonu6237
    8 months ago

    Interesting! I now have an idea of what Delta Live Tables can do.

  • @smedegaardpedersen
    9 months ago

    Super good stuff. I wonder if the function call inside the loop @1:13:22 should have been `create_report(r)` instead of `create_table(r)`?

  • @TheDataArchitect
    6 months ago

    43:10 this is awesome, man.

  • @mateen161
    7 months ago

    Would it be possible to create unmanaged tables with a location in the data lake using DLT pipelines?
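On the question above: the Python `@dlt.table` decorator accepts a `path` argument for writing the table's data to an explicit data-lake location instead of the pipeline's managed storage (behavior differs under Unity Catalog, where explicit paths may not be supported). A hedged sketch with an invented path and source; this runs only inside a DLT pipeline:

```python
import dlt

# Hedged sketch: `path` stores the table's data at an explicit location
# (an unmanaged/external layout) rather than the managed default.
# The abfss URI below is illustrative, not from the talk.
@dlt.table(
    name="device_events_external",
    path="abfss://lake@myaccount.dfs.core.windows.net/dlt/device_events"
)
def device_events_external():
    return spark.read.table("samples.default.events")  # illustrative source
```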

  • @user-kx6ke9oy3v
    10 months ago

    Where can I get the PPT and the demo code?

  • @web3tel
    8 months ago

    I am not sure I understood the repeated references to the "errors in our docs". Can you please clarify? What would be a reason to publish docs with errors? Is there quality control over these docs?

  • @user-kr1bf7vd3r
    6 months ago

    @michaelarmbrust2076 While using apply_changes, how do we handle duplicates in the sequence-by column in a stateless way? Does dropDuplicates deduplicate data within the micro-batch, like a forEachBatch would, or would it attempt to deduplicate the whole stream unless a watermark is given?
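For background on the question above: in Spark Structured Streaming, `dropDuplicates` on a streaming DataFrame keeps key state for the entire stream, not per micro-batch; a watermark is what bounds that state. A sketch with illustrative column names (requires a Spark runtime with a streaming source already defined as `events`):

```python
# Streaming dedup sketch. Without withWatermark, Spark retains every key
# ever seen in state indefinitely. With it, state older than the threshold
# is evicted, bounding memory -- but dedup still spans micro-batches within
# the watermark window, unlike a manual foreachBatch dedup.
deduped = (
    events                                   # a streaming DataFrame
    .withWatermark("event_time", "10 minutes")
    .dropDuplicates(["device_id", "event_time"])
)
```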

  • @user-nv9fv2up5d
    2 months ago

    Quick question: if a record is hard-deleted from the source table, how will apply_changes CDC handle it?
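Regarding hard deletes: `dlt.apply_changes` only sees what the CDC feed emits, so a hard delete must arrive as an explicit delete event; the `apply_as_deletes` argument tells DLT which rows to treat as deletions of the target record. A hedged sketch (table, column, and flag names are invented; this runs only inside a DLT pipeline):

```python
import dlt
from pyspark.sql.functions import col, expr

dlt.create_streaming_table("customers")

dlt.apply_changes(
    target="customers",
    source="customers_cdc_feed",            # illustrative CDC source view
    keys=["customer_id"],
    sequence_by=col("change_ts"),
    apply_as_deletes=expr("operation = 'DELETE'"),  # map hard-delete events
    stored_as_scd_type=1
)
```

If the source hard-deletes a row without emitting any change event, no CDC mechanism can observe it; you would need periodic full snapshots or soft-delete flags upstream.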

  • @Rothbardo
    8 months ago

    Anyone have a link to the slides?

  • @TheDataArchitect
    6 months ago

    37:10 no Azure storage accounts?

  • @user-kx6ke9oy3v
    8 months ago

    Question here: why do I get this error when I run the same thing?

        16:08:48 Running with dbt=1.6.2
        16:08:49 Registered adapter: databricks=1.6.4
        16:08:49 Unable to do partial parsing because saved manifest not found. Starting full parse.
        16:08:51 Found 2 models, 0 sources, 0 exposures, 0 metrics, 471 macros, 0 groups, 0 semantic models
        16:14:02 Concurrency: 8 threads (target='databricks_cluster')
        16:14:02 1 of 2 START sql streaming_table model default.device .......................... [RUN]
        16:14:03 1 of 2 OK created sql streaming_table model default.device ..................... [OK in 0.53s]
        16:14:03 2 of 2 START sql materialized_view model default.device_activity ............... [RUN]
        16:14:04 2 of 2 ERROR creating sql materialized_view model default.device_activity ...... [ERROR in 0.82s]
        16:14:04 Finished running 1 streaming_table model, 1 materialized_view model in 0 hours 5 minutes and 12.60 seconds (312.60s).
        16:14:04 Completed with 1 error and 0 warnings:
        16:14:04 Runtime Error in model device_activity (models/example/device_activity.sql)
          [TABLE_OR_VIEW_NOT_FOUND] The table or view `main`.`default`.`device` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.

    From my understanding the table can only be created by the DLT pipeline and dbt cannot create it, yet you succeeded in creating the streaming table and MV. May I know why?

  • @irfana398
    9 months ago

    Why can't we run the code in a cell for debugging? I have found DLT has so many limitations and is hard to debug.

  • @alirezahassani3767
    9 months ago

    I had been eagerly anticipating the release of this feature this year. Hopefully they will add it soon.

  • @michaelarmbrust2076
    7 months ago

    We are working on a debugging experience that will be integrated with notebooks.

  • @saravananharisamy8085
    5 months ago

    Please share the repo for the CI/CD, at least.

  • @oleksiy8105
    5 months ago

    Streaming is always costly... If you trigger it manually or on a schedule, it is not streaming...

  • @spitfirexvii
    6 months ago

    John Carmack, is that you?

  • @jhonsen9842
    a month ago

    This is how you make the data engineer's job easy, and pay them less.

  • @VerySeriousMan
    5 months ago

    Hard to follow unless you know a lot already.

  • @msftora3
    3 months ago

    Just another stereotypical reinvention of the wheel.
