Curriculum For This Course
Video tutorials list
Introduction
1. Introduction (1:28)
2. Curriculum (2:32)
A look around Fabric
1. Signing into Microsoft Fabric (5:12)
2. Why do I need a work email address, and how can I get one if I don't have one? (13:49)
3. Creating a Fabric capacity and configuring Fabric-enabled workspace settings (7:02)
4. Identify requirements for a Fabric solution and manage Fabric capacity (6:56)
5. A quick tour of Fabric (8:55)
Using Dataflow Gen2
1. Ingest data by using a dataflow (8:22)
2. Add a destination to a dataflow (7:08)
3. Saving as a template and scheduling a dataflow (4:52)
4. Implement Fast Copy when using dataflows (3:22)
5. Monitor data transformation, identify and resolve errors using dataflows (6:21)
6. Optimize a dataflow (7:04)
Transforming data using Dataflow Gen2
1. The first part of the Home menu, including converting column data types (7:06)
2. Removing rows/columns, and filtering and sorting data (8:30)
3. Grouping and aggregating data, and duplicating and referencing queries (5:52)
4. Denormalize data by joining data together using Merge Queries (6:34)
5. Unioning data using Append Queries (5:50)
6. Identify and resolve duplicate data, missing data (null values) (8:51)
7. Transforming data and adding additional columns (6:55)
8. Practice Activity Number 1 - The Solution (9:21)
Transforming data by using Power Query (M)
1. Introducing the M language (8:42)
2. M Number functions (9:14)
3. M Text functions (6:51)
4. M Date, Time and Duration functions (7:15)
5. M Group functions and removing rows (6:29)
6. M Table functions (9:31)
Using pipelines
1. Ingest data by using a data pipeline, and adding other activities (7:05)
2. Copy data by using a data pipeline (9:29)
3. Schedule data pipelines and monitor data pipeline runs (4:27)
4. Identifying and resolving pipeline errors, and optimizing a pipeline (6:41)
5. Exploring sample data (including copy data assistant) + data pipeline templates (1:57)
6. Practice Activity Number 2 - The Solution (9:03)
Loading and saving data using notebooks
1. Ingesting data into a lakehouse using a local upload (5:28)
2. Choose an appropriate method for copying to a Lakehouse or Warehouse (3:03)
3. Ingesting data using a notebook, and copying to a table (9:00)
4. Saving data to a file or Lakehouse table (8:20)
5. Loading data from a table in PySpark and SQL, and manipulating the results (7:16)
6. Practice Activity Number 3 - The Solution (3:31)
Manipulating dataframes: choosing columns and rows
1. Reducing the number of columns shown (5:53)
2. Filtering data with where, limit and tail (7:14)
3. Enriching data by adding new columns (3:07)
4. Using Functions (7:41)
5. More advanced filtering (7:28)
6. Practice Activity Number 4 using PySpark - The Solution (8:37)
7. Practice Activity Number 5 using SQL - The Solution (3:28)
Converting data types, aggregating and sorting dataframes
1. Converting data types (6:08)
2. Importing data using an explicit data structure (3:58)
3. Formatting dates as strings (6:42)
4. Aggregating and re-filtering data (4:37)
5. Sorting the results (5:52)
6. Using all 6 SQL Clauses (4:49)
7. Practice Activity Number 6 using PySpark - The Solution (6:35)
8. Practice Activity Number 7 using SQL - The Solution (5:55)
Transforming data in a lakehouse
1. Merging data (8:12)
2. Identifying and resolving duplicate data (5:11)
3. Joining data using an Inner join (6:31)
4. Joining data using other joins (6:44)
5. Identifying missing data or null values (7:33)
6. Practice Activity Number 8 using PySpark - The Solution (3:00)
7. Practice Activity Number 9 using PySpark - The Solution (7:34)
8. Practice Activity Number 10 using SQL - The Solution (8:38)
Improving notebook performance and automating notebooks
1. Schedule notebooks (2:50)
2. Process data by using Spark structured streaming in a notebook (8:12)
3. Testing the processing of streaming data in a notebook (4:00)
4. Process data by using a Spark Job Definition (9:09)
5. Choosing between a pipeline, a dataflow and a notebook (4:11)
6. Implement parameters with notebooks and pipelines (7:19)
7. Implement dynamic expressions with notebooks and pipelines (6:14)
8. Practice Activity Number 11 - The Solution (8:03)
Creating objects
1. Create and manage shortcuts (5:55)
2. Identify and resolve shortcut errors (3:49)
3. Configure OneLake workspace settings (3:02)
4. Creating a Microsoft Azure SQL Database as a source (2:23)
5. Implement file partitioning for analytics workloads using a pipeline (8:25)
6. Implement file partitioning for analytics workloads - data is in a lakehouse (3:10)
7. Implement mirroring of external databases (7:34)
8. Practice Activity Number 12 - The Solution (3:41)
Optimize performance in notebooks
1. Identify and resolve data loading performance bottlenecks in notebooks (3:11)
2. Implement performance improvements in notebooks, including V-Order (3:54)
3. Identify and resolve issues with Delta table file sizes: optimized writes (2:42)
4. Optimize Spark performance (5:10)
Transform data in a data warehouse
1. Creating tables in a data warehouse (5:52)
2. Inserting data into tables and transforming data in a Data Warehouse (6:58)
3. Choose between dataflows, notebooks, and T-SQL for data transformation (3:26)
4. Slowly changing dimensions - Theory (7:06)
5. Implement Type 0 slowly changing dimensions - Practical Example (4:56)
6. Implement Type 1 and Type 2 slowly changing dimensions - Practical Example (7:29)
Creating incremental data loads
1. Design an incremental data load from a Data Warehouse using a pipeline (5:00)
2. Implement an incremental data load from a Data Warehouse using a pipeline (9:46)
3. Test an incremental data load from a Data Warehouse using a pipeline (4:50)
4. Implementing an incremental data load using a Dataflow Gen2 (9:02)
Manage and optimize a data warehouse
1. Creating a Premium Per User (PPU) workspace and Azure DevOps repos (6:53)
2. Implement version control for a workspace (7:56)
3. Implement database projects, including in source control (7:06)
4. Implement dynamic data masking in a Data Warehouse - Video 1 (6:32)
5. Implement dynamic data masking in a Data Warehouse - Video 2 (6:14)
6. Optimize a data warehouse (7:10)
7. Practice Activity Number 13 - The Solution (5:59)
Creating an eventhouse
1. Creating an eventhouse, exploring the environment, and getting data (5:39)
2. Creating sample KQL and SQL queries, and exploring the query environment (7:52)
Selecting, filtering and aggregating data using KQL
1. Selecting data using KQL (7:12)
2. Further selecting columns and ordering data using KQL (4:29)
3. Limiting the number of rows (5:23)
4. Practice Activity Number 14 - The Solution (9:04)
5. Creating a string literal (4:57)
6. Filtering for the entirety of a string (8:00)
7. Filtering for part of a string (7:13)
8. Aggregating data (8:05)
9. Practice Activity Number 15 - The Solution (7:01)
KQL Functions
1. Empty strings, concatenating and trimming strings (8:51)
2. Manipulating strings (8:25)
3. Other string functions (1:16)
4. Practice Activity Number 16 - The Solution (7:27)
5. Number Data Types (6:37)
6. Other Math Functions (4:05)
7. datetime and timespan Data Types (5:12)
8. datetime and timespan Functions (8:10)
9. Practice Activity Number 17 - The Solution (6:04)
Transforming data using KQL
1. Merging data (4:07)
2. Joining data (10:33)
3. Practice Activity Number 18 - The Solution (5:18)
4. Identify and resolve duplicate data, missing data, or null values (6:13)
5. The iif/iff and case conditional functions (3:48)
6. The OneLake data and real-time hubs + implementing OneLake integration (5:39)
7. Practice Activity Number 19 - The Solution (4:57)
Ingest and transform streaming data: eventstreams
1. Choose an appropriate streaming engine (3:02)
2. Processing data by using an eventstream (8:39)
3. The Manage fields transform event in an eventstream (6:59)
4. The Group by transform event, including creating windowing functions (7:02)
5. Completing our eventstream (7:24)
Ingest and transform streaming data: other objects
1. Revising KQL Syntax (7:50)
2. Creating a Fabric activator to run based on an event-based trigger (8:00)
3. Ingest data by using continuous integration from OneLake - Part 2 (9:49)
4. Design and implement an event-based trigger based on Azure Blob storage (2:50)
5. Optimizing eventstreams and eventhouses (5:03)
6. Native storage, mirrored storage, or shortcuts in Real-Time Intelligence (8:47)
7. Choose between accelerated shortcuts and non-accelerated shortcuts (3:49)
Workspace settings and Monitoring
1. Spark workspace settings: starter and custom pools, and environments (8:51)
2. Other Spark workspace settings (6:06)
3. Configure domain workspace settings (10:23)
4. Configure data workflow workspace settings (2:11)
5. Recommend settings in the Fabric admin portal (4:04)
6. Implement workspace and item-level access controls for Fabric items (5:03)
7. Installing the Microsoft Fabric Capacity Metrics app (3:55)
8. Using the Microsoft Fabric Capacity Metrics app - Manage Fabric capacity (7:13)
9. Monitor semantic model refresh (3:26)
10. Implement workspace logging (5:27)
11. Workspace logging dashboards (6:06)
12. Querying workspace logs in KQL (6:55)
Configuring security and governance, and deployment pipelines
1. Apply sensitivity labels to items (7:04)
2. Endorse items (4:57)
3. Row-level security in a Data Warehouse (8:54)
4. Column-level security in a Data Warehouse (6:40)
5. Object-level security in a Data Warehouse (8:38)
6. Folder- and file-level access controls in a Lakehouse (5:15)
7. Creating a deployment pipeline (10:35)
8. Configuring a deployment pipeline (6:43)
Congratulations for completing the course
1. What's Next? (1:13)
2. Congratulations for completing the course (0:44)