AWS Data Engineering crash course with sample data & code, covering Amazon Redshift, AWS Glue, Amazon EMR, and Managed Airflow in an end-to-end pipeline: https://youtu.be/7xWS_b9XkGU
Course Transcript:
If you are an absolute beginner, this course will give you a good overview of Amazon Redshift.
The goal is that after taking this course you should be comfortable talking about Redshift. You should be able to participate in group discussions at your workplace and understand solutions concerning Amazon Redshift.
We will start with the fundamentals:
Data Warehouse
MPP System
Columnar
Then we will see how these fundamentals apply to Amazon Redshift, and how parallelism is built into Redshift's core architecture.
Amazon Redshift is a data warehouse offering by AWS (Amazon Web Services).
So what is a Data Warehouse?
A data warehouse is a system that allows users to complete 3 main tasks:
Gather data from various sources
Provide tools to transform data and apply business logic to it
Enable the business to make decisions by supporting reports & visualisations
Massively Parallel Processing (MPP) systems are built on the mechanism of DIVIDE & CONQUER. The main node divides a task into multiple smaller, similar tasks. These tasks are then given to delegates to complete. Once the delegates complete their tasks, they share the results with the main node. (A conceptual sketch follows the summary below.)
Summary:
Divide the work into smaller 'similar' tasks
Individual teams work independently to complete their tasks
The "main node" collates the results back into one output
Columnar databases use a different method of storing data in blocks compared to traditional row-based storage databases. The values of each column are stored in the same/adjacent storage blocks. This facilitates quick retrieval of data, because only the blocks that store the required columns are scanned, not all the blocks. (An example follows the summary below.)
Summary:
Values of a column are stored in the same/adjacent blocks
Efficient reads when only a few columns are required
Better compression at the column level
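For example, consider an analytics query that touches only two columns of a wide table. The table and column names below are hypothetical; the point is which storage blocks get scanned.

    -- 'sales' might have 30 columns, but this query needs only 2.
    SELECT customer_id, SUM(amount) AS total_spent
    FROM sales
    GROUP BY customer_id;

    -- Columnar storage: scan only the blocks holding customer_id and amount.
    -- Row-based storage: every block holds full rows, so all columns are read.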
In this lesson, we will see how Amazon Redshift works as a data warehouse.
Gather data from various sources:
Export data to S3 and run the COPY command (see the example after this list)
Make a JDBC connection to the source & load data into a table
Use a Datashare to bring in data from another Redshift cluster
Use other services - Glue/Lambda/EMR - to process and load data into Redshift
Use a Lake Formation table as an external table in Redshift
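Here is a minimal sketch of the COPY route. The table name, bucket path, and IAM role ARN are placeholders you would replace with your own:

    -- Load CSV files that were exported to S3 into a Redshift table.
    COPY sales
    FROM 's3://my-example-bucket/exports/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS CSV
    IGNOREHEADER 1;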
Apply business transformations:
Allows you to run SQL on the data in your tables (see the example after this list)
Can connect other AWS services like Glue/EMR to process the data
Lets you connect ETL tools to process the data
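As a sketch of transforming data in-database with plain SQL (again with hypothetical table and column names), a business rule can be baked into a new table:

    -- Business logic: daily revenue from completed sales only.
    CREATE TABLE daily_revenue AS
    SELECT sale_date,
           SUM(amount) AS total_revenue
    FROM sales
    WHERE status = 'COMPLETED'
    GROUP BY sale_date;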
Enable the business to make decisions:
Unload data into an S3 bucket for downstream applications (see the example after this list)
QuickSight and other reporting tools can connect for visualisation
Can share data via Datashare with other Redshift clusters
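A minimal UNLOAD sketch, using the same placeholder bucket and IAM role as before:

    -- Export query results to S3 as Parquet for downstream applications.
    UNLOAD ('SELECT * FROM daily_revenue')
    TO 's3://my-example-bucket/exports/daily_revenue_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS PARQUET;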
The Amazon Redshift architecture consists of 2 types of nodes:
Leader Node
Compute Node
*There is a third type of node, the Spectrum node, which I will not cover as part of this beginners' course.
The end user submits a request to the leader node. There is one and only one leader node in an Amazon Redshift cluster. The leader node breaks the task into smaller, similar tasks. These small tasks are passed to the compute nodes for processing.
The compute nodes have their own memory & storage to complete the task. Compute nodes are divided into slices, which are like "mini-computers" that actually process the data. Each compute node has at least 1 slice; the exact number depends on the node type in the Redshift cluster.
Once the task is complete, the compute nodes send their results back to the leader node, which collates the results from the different compute nodes. Once done, it passes the output to the end user.
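You can see this node-to-slice layout in your own cluster by querying the STV_SLICES system view (the output will vary with your cluster's node type and node count):

    -- Shows which slices live on which compute node.
    SELECT node, slice
    FROM stv_slices
    ORDER BY node, slice;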
Because Amazon Redshift is a columnar database, it is generally faster for data analytics than many traditional row-oriented RDBMSs.
Stores data in columnar format
Redshift storage blocks are 1 MB in size
Multiple encoding algorithms are available, like AZ64, LZO, ZSTD and more (see the example after this list)
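As a sketch, encodings can be set per column in the table DDL. The table below is hypothetical; in practice Redshift can also pick encodings automatically if you omit the ENCODE clause:

    CREATE TABLE sales (
        sale_id     BIGINT       ENCODE az64,  -- AZ64: strong default for numeric/date types
        sale_date   DATE         ENCODE az64,
        status      VARCHAR(20)  ENCODE zstd,  -- ZSTD: general-purpose, works well on text
        notes       VARCHAR(256) ENCODE lzo    -- LZO: older general-purpose encoding
    );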
We now know that Amazon Redshift is a columnar database. However, there is also a defined manner that determines how table data is stored in the database.