Snowflake ADA-C01 Real Exam Questions
The questions for ADA-C01 were last updated on Dec 01, 2024.
- Exam Code: ADA-C01
- Exam Name: SnowPro Advanced Administrator
- Certification Provider: Snowflake
- Latest update: Dec 01, 2024
Which actions are considered breaking changes to data that is shared with consumers in the Snowflake Marketplace? (Select TWO).
- A . Dropping a column from a table
- B . Deleting data from a table
- C . Unpublishing the data listing
- D . Renaming a table
- E . Adding region availability to the listing
AD
Explanation:
According to the Snowflake documentation1, breaking changes are changes that affect the schema or structure of the shared data, such as dropping or renaming a column or a table. These changes may cause errors or unexpected results for the consumers who query the shared data. Deleting data from a table, unpublishing the data listing, or adding region availability to the listing are not breaking changes, as they do not alter the schema or structure of the shared data.
1: Managing Data Listings in Snowflake Data Marketplace | Snowflake Documentation
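As an illustration of the distinction the explanation draws, the statements below use hypothetical object names (a shared table PUBLIC.ORDERS) to show which changes alter the schema consumers depend on:

```sql
-- Breaking changes: consumers' queries that reference the old schema will fail
ALTER TABLE PUBLIC.ORDERS DROP COLUMN ORDER_TOTAL;      -- option A
ALTER TABLE PUBLIC.ORDERS RENAME TO PUBLIC.SALES;       -- option D

-- Non-breaking change: the schema is unchanged, only row contents differ
DELETE FROM PUBLIC.ORDERS WHERE ORDER_DATE < '2020-01-01';  -- option B
```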
A Snowflake Administrator needs to set up Time Travel for a presentation area that includes fact and dimension tables, and receives a lot of meaningless and erroneous IoT data. Time Travel is being used as a component of the company’s data quality process, in which the ingestion pipeline should revert to a known-quality data state if any anomalies are detected in the latest load. Data from the past 30 days may have to be retrieved because of latencies in the data acquisition process.
According to best practices, how should these requirements be met? (Select TWO).
- A . Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas.
- B . The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS.
- C . The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas).
- D . Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables.
- E . The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data.
BE
Explanation:
According to the Understanding & Using Time Travel documentation, Time Travel is a feature that allows you to query, clone, and restore historical data in tables, schemas, and databases for up to 90 days.
To meet the requirements of the scenario, the following best practices should be followed:
• The fact and dimension tables should have the same DATA_RETENTION_TIME_IN_DAYS. This parameter specifies the number of days for which the historical data is preserved and can be accessed by Time Travel. To ensure that the fact and dimension tables can be reverted to a consistent state in case of any anomalies in the latest load, they should have the same retention period. Otherwise, some tables may lose their historical data before others, resulting in data inconsistency and quality issues.
• The fact and dimension tables should be cloned together using the same Time Travel options to reduce potential referential integrity issues with the restored data. Cloning is a way of creating a copy of an object (table, schema, or database) at a specific point in time using Time Travel. To ensure that the fact and dimension tables are cloned with the same data set, they should be cloned together using the same AT or BEFORE clause. This will avoid any referential integrity issues that may arise from cloning tables at different points in time.
The other options are incorrect because:
• Related data should not be placed together in the same schema. Facts and dimension tables should each have their own schemas. This is not a best practice for Time Travel, as it does not affect the ability to query, clone, or restore historical data. However, it may be a good practice for data modeling and organization, depending on the use case and design principles.
• The DATA_RETENTION_TIME_IN_DAYS should be kept at the account level and never used for lower level containers (databases and schemas). This is not a best practice for Time Travel, as it limits the flexibility and granularity of setting the retention period for different objects. The retention period can be set at the account, database, schema, or table level, and the most specific setting overrides the more general ones. This allows for customizing the retention period based on the data needs and characteristics of each object.
• Only TRANSIENT tables should be used to ensure referential integrity between the fact and dimension tables. This is not a best practice for Time Travel, as it does not affect the referential integrity between the tables. Transient tables are tables that do not have a Fail-safe period, which means that they cannot be recovered by Snowflake after the retention period ends. However, they still support Time Travel within the retention period, and can be queried, cloned, and restored like permanent tables. The choice of table type depends on the data durability and availability requirements, not on the referential integrity.
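The two recommended practices can be sketched in SQL as follows (table names and the timestamp are hypothetical; the AT clause syntax is per the Snowflake Time Travel documentation):

```sql
-- Give both tables the same 30-day retention so their histories expire together
ALTER TABLE fact_sales SET DATA_RETENTION_TIME_IN_DAYS = 30;
ALTER TABLE dim_customer SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- Clone both tables at the same point in time so the restored pair is consistent
CREATE TABLE fact_sales_restored CLONE fact_sales
  AT (TIMESTAMP => '2024-11-30 00:00:00'::timestamp_tz);
CREATE TABLE dim_customer_restored CLONE dim_customer
  AT (TIMESTAMP => '2024-11-30 00:00:00'::timestamp_tz);
```

If the fact and dimension tables share a schema, cloning the whole schema with a single AT clause achieves the same consistency in one statement.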
Which function is the role SECURITYADMIN responsible for that is not granted to role USERADMIN?
- A . Reset a Snowflake user’s password
- B . Manage system grants
- C . Create new users
- D . Create new roles
B
Explanation:
According to the Snowflake documentation1, the SECURITYADMIN role is responsible for managing all grants on objects in the account, including system grants. The USERADMIN role can only create and manage users and roles, but not grant privileges on other objects. Therefore, the function that is unique to the SECURITYADMIN role is to manage system grants.
Option A is incorrect because both roles can reset a user’s password.
Option C is incorrect because both roles can create new users.
Option D is incorrect because both roles can create new roles.
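To make the distinction concrete, the sketch below (object names are hypothetical) contrasts what each role can do; SECURITYADMIN holds the global MANAGE GRANTS privilege, while USERADMIN is limited to user and role management:

```sql
-- SECURITYADMIN can grant privileges on objects it does not own
USE ROLE SECURITYADMIN;
GRANT USAGE ON DATABASE SALES_DB TO ROLE ANALYST;
GRANT SELECT ON ALL TABLES IN SCHEMA SALES_DB.PUBLIC TO ROLE ANALYST;

-- USERADMIN can create and manage users and roles, but lacks MANAGE GRANTS
USE ROLE USERADMIN;
CREATE ROLE ANALYST;
CREATE USER new_analyst PASSWORD = '...' MUST_CHANGE_PASSWORD = TRUE;
ALTER USER new_analyst RESET PASSWORD;
```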
An Administrator has been asked to support the company’s application team, which needs to build a loyalty program for its customers. The customer table contains Personally Identifiable Information (PII), and the application team’s role is DEVELOPER.
CREATE TABLE customer_data (
customer_first_name string,
customer_last_name string,
customer_address string,
customer_email string,
… some other columns
);
The application team would like to access the customer data, but the email field must be obfuscated.
How can the Administrator protect the sensitive information, while maintaining the usability of the data?
- A . Create a view on the customer_data table to eliminate the email column by omitting it from the SELECT clause. Grant the role DEVELOPER access to the view.
- B . Create a separate table for all the non-PII columns and grant the role DEVELOPER access to the new table.
- C . Use the CURRENT_ROLE and CURRENT_USER context functions to integrate with a secure view and filter the sensitive data.
- D . Use the CURRENT_ROLE context function to integrate with a masking policy on the fields that contain sensitive data.
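A sketch of the masking-policy approach described in option D, which obfuscates the email while leaving the rest of the table usable (the policy name and role list are hypothetical):

```sql
-- Return the real value only to roles allowed to see it; mask it for everyone else
CREATE MASKING POLICY email_mask AS (val string) RETURNS string ->
  CASE
    WHEN CURRENT_ROLE() IN ('ACCOUNTADMIN') THEN val
    ELSE '*********'
  END;

-- Attach the policy to the sensitive column; DEVELOPER keeps full access to the
-- table but sees only obfuscated email values
ALTER TABLE customer_data MODIFY COLUMN customer_email
  SET MASKING POLICY email_mask;

GRANT SELECT ON TABLE customer_data TO ROLE DEVELOPER;
```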
What roles or security privileges will allow a consumer account to request and get data from the Data Exchange? (Select TWO).
- A . SYSADMIN
- B . SECURITYADMIN
- C . ACCOUNTADMIN
- D . IMPORT SHARE and CREATE DATABASE
- E . IMPORT PRIVILEGES and SHARED DATABASE
CD
Explanation:
According to the Accessing a Data Exchange documentation, a consumer account can request and get data from the Data Exchange using either the ACCOUNTADMIN role or a role with the IMPORT SHARE and CREATE DATABASE privileges. The ACCOUNTADMIN role is the top-level role that has all privileges on all objects in the account, including the ability to request and get data from the Data Exchange. A role with the IMPORT SHARE and CREATE DATABASE privileges can also request and get data from the Data Exchange, as these are the minimum privileges required to create a database from a share.
The other options are incorrect because:
• A. The SYSADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SYSADMIN role is a pre-defined role that can create and manage warehouses, databases, and other objects in the account, but it does not include the account-level privileges reserved for the ACCOUNTADMIN role, such as managing users, roles, and shares.
• B. The SECURITYADMIN role does not have the privilege to request and get data from the Data Exchange, unless it is also granted the IMPORT SHARE and CREATE DATABASE privileges. The SECURITYADMIN role is a pre-defined role that manages grants globally and security objects in the account, such as network policies, but not data objects, such as databases, schemas, and tables.
• E. The IMPORT PRIVILEGES and SHARED DATABASE are not valid privileges in Snowflake. The correct privilege names are IMPORT SHARE and CREATE DATABASE, as explained above.
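The minimum-privilege path from option D can be sketched as follows (role, database, and share names are hypothetical):

```sql
-- ACCOUNTADMIN grants the two account-level privileges a consumer role needs
USE ROLE ACCOUNTADMIN;
GRANT IMPORT SHARE ON ACCOUNT TO ROLE DATA_CONSUMER;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE DATA_CONSUMER;

-- The consumer role can then turn an inbound share into a queryable database
USE ROLE DATA_CONSUMER;
CREATE DATABASE shared_data FROM SHARE provider_account.provider_share;
```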
An Administrator wants to delegate the administration of a company’s data exchange to users who do not have access to the ACCOUNTADMIN role.
How can this requirement be met?
- A . Grant imported privileges on data exchange EXCHANGE_NAME to ROLE_NAME;
- B . Grant modify on data exchange EXCHANGE_NAME to ROLE_NAME;
- C . Grant ownership on data exchange EXCHANGE_NAME to ROLE_NAME;
- D . Grant usage on data exchange EXCHANGE_NAME to ROLE_NAME;
A
Explanation:
According to the Snowflake Data Exchange documentation, an ACCOUNTADMIN can delegate administration of a Data Exchange by granting imported privileges on it to another role, for example: GRANT IMPORTED PRIVILEGES ON DATA EXCHANGE EXCHANGE_NAME TO ROLE ROLE_NAME;. Users with that role can then manage the exchange (such as adding or removing members and approving or denying listing requests) without having access to the ACCOUNTADMIN role.
Options B (MODIFY) and D (USAGE) are not the documented mechanism for delegating Data Exchange administration, and option C is incorrect because ownership of a Data Exchange cannot be granted to another role.