Azure Data Engineers integrate, transform, and consolidate data from various structured and unstructured data systems into structures that are suitable for building analytics solutions.
Candidates for Exam DP-203: Data Engineering on Microsoft Azure must have solid knowledge of data processing languages, such as SQL, Python, or Scala, and they need to understand parallel processing and data architecture patterns.
Beta exams are not scored immediately because we are gathering data on the quality of the questions and the exam. Learn more about the value and importance of beta exams.
Part of the requirements for: Microsoft Certified: Azure Data Engineer Associate
Related exams: none
Important: See details
Languages: English
Retirement date: none
This exam measures your ability to accomplish the following technical tasks: design and implement data storage; design and develop data processing; design and implement data security; and monitor and optimize data storage and data processing.
Skills measured
Design and implement data storage (40-45%)
Design and develop data processing (25-30%)
Design and implement data security (10-15%)
Monitor and optimize data storage and data processing (10-15%)
Audience Profile
Candidates for this exam should have subject matter expertise integrating, transforming, and consolidating data from various structured and unstructured data systems into a structure that is suitable for building analytics solutions.
Azure Data Engineers help stakeholders understand the data through exploration, and they build and maintain secure and compliant data processing pipelines by using different tools and techniques. These professionals use various Azure data services and languages to store and produce cleansed and enhanced datasets for analysis.
Azure Data Engineers also help ensure that data pipelines and data stores are high-performing, efficient, organized, and reliable, given a set of business requirements and constraints. They deal with unanticipated issues swiftly, and they minimize data loss. They also design, implement, monitor, and optimize data platforms to meet the needs of the data pipelines.
Candidates for this exam must have strong knowledge of data processing languages, such as SQL, Python, or Scala, and they need to understand parallel processing and data architecture patterns.
Skills Measured
NOTE: The bullets that follow each of the skills measured are intended to illustrate how we assess that skill. This list is not definitive or exhaustive.
NOTE: Most questions cover features that are General Availability (GA). The exam may contain questions on Preview features, if those features are commonly used.
Design and Implement Data Storage (40-45%)
Design a data storage structure
design an Azure Data Lake solution
recommend file types for storage
recommend file types for analytical queries
design for efficient querying
design for data pruning
design a folder structure that represents the levels of data transformation
design a distribution strategy
design a data archiving solution
Design a partition strategy
design a partition strategy for files
design a partition strategy for analytical workloads
design a partition strategy for efficiency/performance
design a partition strategy for Azure Synapse Analytics
identify when partitioning is needed in Azure Data Lake Storage Gen2
Design the serving layer
design star schemas
design slowly changing dimensions
design a dimensional hierarchy
design a solution for temporal data
design for incremental loading
design analytical stores
design metastores in Azure Synapse Analytics and Azure Databricks
Implement physical data storage structures
implement compression
implement partitioning
implement sharding
implement different table geometries with Azure Synapse Analytics pools
implement data redundancy
implement distributions
implement data archiving
Implement logical data structures
build a temporal data solution
build a slowly changing dimension
build a logical folder structure
build external tables
implement file and folder structures for efficient querying and data pruning
Implement the serving layer
deliver data in a relational star schema
deliver data in Parquet files
maintain metadata
implement a dimensional hierarchy
Design and Develop Data Processing (25-30%)
Ingest and transform data
transform data by using Apache Spark
transform data by using Transact-SQL
transform data by using Data Factory
transform data by using Azure Synapse Pipelines
transform data by using Stream Analytics
cleanse data
split data
shred JSON
encode and decode data
configure error handling for the transformation
normalize and denormalize values
transform data by using Scala
perform data exploratory analysis
Design and develop a batch processing solution
develop batch processing solutions by using Data Factory, Data Lake, Spark, Azure Synapse Pipelines, PolyBase, and Azure Databricks
create data pipelines
design and implement incremental data loads
design and develop slowly changing dimensions
handle security and compliance requirements
scale resources
configure the batch size
design and create tests for data pipelines
integrate Jupyter/IPython notebooks into a data pipeline
handle duplicate data
handle missing data
handle late-arriving data
upsert data
regress to a previous state
design and configure exception handling
configure batch retention
design a batch processing solution
debug Spark jobs by using the Spark UI
Design and develop a stream processing solution
develop a stream processing solution by using Stream Analytics, Azure Databricks, and Azure Event Hubs
process data by using Spark structured streaming
monitor for performance and functional regressions
design and create windowed aggregates
handle schema drift
process time series data
process across partitions
process within one partition
configure checkpoints/watermarking during processing
scale resources
design and create tests for data pipelines
optimize pipelines for analytical or transactional purposes
handle interruptions
design and configure exception handling
upsert data
replay archived stream data
design a stream processing solution
Manage batches and pipelines
trigger batches
handle failed batch loads
validate batch loads
manage data pipelines in Data Factory/Synapse Pipelines
schedule data pipelines in Data Factory/Synapse Pipelines
implement version control for pipeline artifacts
manage Spark jobs in a pipeline
Design and Implement Data Security (10-15%)
Design security for data policies and standards
design data encryption for data at rest and in transit
design a data auditing strategy
design a data masking strategy
design for data privacy
design a data retention policy
design to purge data based on business requirements
design Azure role-based access control (Azure RBAC) and POSIX-like Access Control List (ACL) for Data Lake Storage Gen2
design row-level and column-level security
Implement data security
implement data masking
encrypt data at rest and in motion
implement row-level and column-level security
implement Azure RBAC
implement POSIX-like ACLs for Data Lake Storage Gen2
implement a data retention policy
implement a data auditing strategy
manage identities, keys, and secrets across different data platform technologies
implement secure endpoints (private and public)
implement resource tokens in Azure Databricks
load a DataFrame with sensitive information
write encrypted data to tables or Parquet files
manage sensitive information
Monitor and Optimize Data Storage and Data Processing (10-15%)
Monitor data storage and data processing
implement logging used by Azure Monitor
configure monitoring services
measure performance of data movement
monitor and update statistics about data across a system
monitor data pipeline performance
measure query performance
monitor cluster performance
understand custom logging options
schedule and monitor pipeline tests
interpret Azure Monitor metrics and logs
interpret a Spark directed acyclic graph (DAG)
Optimize and troubleshoot data storage and data processing
compact small files
rewrite user-defined functions (UDFs)
handle skew in data
handle data spill
tune shuffle partitions
find shuffling in a pipeline
optimize resource management
tune queries by using indexers
tune queries by using cache
optimize pipelines for analytical or transactional purposes
optimize pipeline for descriptive versus analytical workloads
troubleshoot a failed Spark job
troubleshoot a failed pipeline run
QUESTION 1
You need to implement the surrogate key for the retail store table. The solution must meet the sales transaction dataset requirements.
What should you create?
A. a table that has an IDENTITY property
B. a system-versioned temporal table
C. a user-defined SEQUENCE object
D. a table that has a FOREIGN KEY constraint
Correct Answer: A
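A minimal sketch of what an IDENTITY-based surrogate key can look like in a Synapse dedicated SQL pool is shown below; the table and column names (DimRetailStore, StoreKey, and so on) are illustrative assumptions, not taken from the case study.

-- Illustrative example: table and column names are hypothetical.
-- IDENTITY(1,1) generates the surrogate key values; in a dedicated SQL pool
-- the generated values are unique but not guaranteed to be sequential or gap-free.
CREATE TABLE dbo.DimRetailStore
(
    StoreKey    INT IDENTITY(1,1) NOT NULL,
    StoreNumber INT NOT NULL,
    StoreName   NVARCHAR(100) NOT NULL,
    City        NVARCHAR(50) NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,
    CLUSTERED COLUMNSTORE INDEX
);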
QUESTION 2
You need to design an Azure Synapse Analytics dedicated SQL pool that meets the following requirements:
Can return an employee record from a given point in time.
Maintains the latest employee information.
Minimizes query complexity.
How should you model the employee data?
A. as a temporal table
B. as a SQL graph table
C. as a degenerate dimension table
D. as a Type 2 slowly changing dimension (SCD) table
Correct Answer: D
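A minimal sketch of a Type 2 slowly changing dimension for employees, together with a point-in-time lookup, is shown below; the table, column names, and dates are illustrative assumptions rather than part of the question.

-- Illustrative example: table, column names, and dates are hypothetical.
-- A Type 2 SCD keeps one row per version of an employee, bounded by
-- ValidFrom/ValidTo, so a single filter returns the record as of any date.
CREATE TABLE dbo.DimEmployee
(
    EmployeeKey  INT IDENTITY(1,1) NOT NULL,  -- surrogate key
    EmployeeID   INT NOT NULL,                -- business key
    EmployeeName NVARCHAR(100) NOT NULL,
    Department   NVARCHAR(50) NOT NULL,
    ValidFrom    DATETIME2 NOT NULL,
    ValidTo      DATETIME2 NULL,              -- NULL marks the current row
    IsCurrent    BIT NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(EmployeeID),
    CLUSTERED COLUMNSTORE INDEX
);

-- Point-in-time query: the row whose validity window covers the requested date.
SELECT EmployeeName, Department
FROM dbo.DimEmployee
WHERE EmployeeID = 1001
  AND ValidFrom <= '2021-06-01'
  AND (ValidTo IS NULL OR ValidTo > '2021-06-01');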
QUESTION 3
You have an enterprise-wide Azure Data Lake Storage Gen2 account. The data lake is accessible only through an Azure virtual network named VNET1.
You are building a SQL pool in Azure Synapse that will use data from the data lake.
Your company has a sales team. All the members of the sales team are in an Azure Active Directory group named Sales. POSIX controls are used to assign the Sales group access to the files in the data lake.
You plan to load data to the SQL pool every hour.
You need to ensure that the SQL pool can load the sales data from the data lake.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Add the managed identity to the Sales group.
B. Use the managed identity as the credentials for the data load process.
C. Create a shared access signature (SAS).
D. Add your Azure Active Directory (Azure AD) account to the Sales group.
E. Use the shared access signature (SAS) as the credentials for the data load process.
F. Create a managed identity.
Correct Answer: A, B, F
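After the managed identity has been created and added to the Sales group, the hourly load can authenticate as that identity. A minimal sketch using the COPY statement in the dedicated SQL pool is shown below; the storage account, container, path, and table names are illustrative assumptions.

-- Illustrative example: account, container, path, and table names are hypothetical.
-- The COPY statement authenticates with the SQL pool's managed identity,
-- which inherits the Sales group's POSIX ACLs on the data lake files.
COPY INTO dbo.StageSales
FROM 'https://contosodatalake.dfs.core.windows.net/sales/2021/*.parquet'
WITH
(
    FILE_TYPE = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);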
Microsoft Azure DP-203 exam PDFs, certification guidance, and training are available at certkingdom.com.