Duration
21 hours (usually 3 days including breaks)
Requirements
There are no specific prerequisites for this course.
Overview
This bespoke course has been designed specifically to meet the requirements provided by Now Managed Learning Services.
Course Outline
1. MySQL High Availability
2. Galera Cluster Architecture
3. Percona XtraDB Cluster
4. Installation and configuration of Galera Cluster
5. Operations of Galera Cluster
6. Backup and Restore
7. Load Balancing
8. Performance tuning and monitoring
9. High Availability and Scalability with Galera Cluster
10. Security
11. Galera Cluster Multi-Datacenter Replication
12. Troubleshooting
13. Integration with ProxySQL (optional)
14. Hands-on sessions to learn and practice skills
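As a taste of the configuration work covered in this outline, a minimal `my.cnf` fragment for a Galera/Percona XtraDB Cluster node might look like the following sketch; the cluster name, node names, and addresses are placeholders, and the provider path varies by distribution:

```ini
[mysqld]
# Galera provider library and cluster membership (addresses are placeholders)
wsrep_provider=/usr/lib/galera4/libgalera_smm.so
wsrep_cluster_name=demo_cluster
wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_name=node1
wsrep_node_address=10.0.0.1
# State snapshot transfer method used when a node joins the cluster
wsrep_sst_method=xtrabackup-v2
# Galera requires InnoDB with row-based binary logging
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
```

The course walks through what each of these `wsrep_*` options does and how they interact during installation, state transfer, and day-to-day operations.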
Duration
14 hours (usually 2 days including breaks)
Overview
Percona Server for MySQL is optimized for cloud computing, NoSQL access, containers and modern hardware such as SSD and Flash storage.
- Cloud ready
Dramatically reduces downtime on servers with slow disks and large memory, such as 4XL EC2 servers on EBS volumes
- SaaS deployable
Increases flexibility for architectures such as co-located databases with hundreds of thousands of tables and heterogeneous backup and retention policies
- Vertical scalability and server consolidation
Scales to over 48 CPU cores, with the ability to achieve hundreds of thousands of I/O operations per second on high-end solid-state hardware
- Query, object and user level instrumentation
Detailed query logging with per-query statistics about locking, I/O, and query plan, as well as performance and access counters per-table, per-index, per-user, and per-host
- Enterprise ready
Percona Server for MySQL includes advanced, fully enabled external authentication, audit logging and thread pool scalability features that are otherwise only available in Oracle's commercial MySQL Enterprise Edition.
Course Outline
Percona Server Installation
- Choosing version
- Downloading installer (YUM, RPM)
- Installation on Linux machine
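A minimal sketch of a YUM-based installation on a Red Hat-family system, assuming the repository package and release series names documented by Percona (the `ps80` series name is an assumption and changes with the major version):

```shell
# Run as root. Install Percona's repository package, enable a release
# series, then install and start the server.
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
percona-release setup ps80
yum install percona-server-server
systemctl start mysqld
```

The course covers choosing the right version and the equivalent RPM-based steps.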
Percona Server Files and Scripts
- Percona/MySQL Programs
- Percona Server
- Percona Client
- GUI Tools
Percona Server Configuration
- The Server SQL Mode
- Server System Variables
- Dynamic System Variables
- Server Status Variables
- Shutdown Process
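The distinction between static and dynamic system variables can be sketched in a few statements; dynamic variables such as `max_connections` take effect immediately, while static ones require a restart:

```sql
-- Inspect a server variable, then change a dynamic one at runtime
SHOW VARIABLES LIKE 'max_connections';
SET GLOBAL max_connections = 500;   -- dynamic: no restart needed
-- Status variables report runtime counters rather than settings
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```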
Percona Security Issues
- Securing Percona Server Against Attacks
- Security-Related Options
- Security Issues with LOAD DATA LOCAL
Percona Access Privilege System
- Percona Privilege System Overview
- Privileges Provided by Percona
- Connecting to the Percona Server – Stages
- Access Control, Stage 1: Connection Verification
- Access Control, Stage 2: Request Verification
- Access Denied Errors
Percona User Account Management
- Users and Passwords
- Creating New Users
- Deleting User Accounts
- Limiting User Resources
- Changing Passwords
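The account-management tasks listed above map onto a handful of SQL statements. A hedged sketch, using placeholder account and database names:

```sql
-- Create a user, grant privileges, limit resources, change the password, drop
CREATE USER 'app'@'localhost' IDENTIFIED BY 'S3cret!';
GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'app'@'localhost';
ALTER USER 'app'@'localhost' WITH MAX_QUERIES_PER_HOUR 10000;
ALTER USER 'app'@'localhost' IDENTIFIED BY 'N3wS3cret!';
DROP USER 'app'@'localhost';
```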
Percona Database Maintenance
- Backup and Recovery – dump vs. XtraBackup
- Point-in-Time Recovery
- Maintenance and Crash Recovery
- Getting Table Information
Percona Log Files
- Error Log
- General Query Log
- Update Log
- Binary Log
- Slow Query Log
- Log File Maintenance and Rotation
Percona Query Cache
- The Concept of Query Cache
- Testing Query Cache with SELECT
- Configuring Query Cache
- Checking Query Cache Status and Maintenance
Installing and starting Percona XtraDB Cluster
Duration
14 hours (usually 2 days including breaks)
Requirements
- Computer literacy
- Knowledge of any operating system
Overview
- How do you build a query?
- What is a relational database?
- What are the structure and commands of SQL?
Course Outline
Relational database models
- Relational operators
- Characteristics of declarative SQL language
- SQL syntax
- Division of the language into DQL, DML, DDL and DCL
Data Query Language
- SELECT queries
- Aliases for columns and tables
- Date handling (DATE types, display functions, formatting)
- Group (aggregate) functions
- Inner and outer joins (JOIN clause)
- UNION operator
- Nested subqueries (in the WHERE clause, as a table source, as a column expression)
- Correlated subqueries
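Nested and correlated subqueries can be sketched as follows; SQLite is used here only because it runs in-process, and the queries themselves are standard SQL that works unchanged on MySQL:

```python
import sqlite3

# In-memory database to illustrate nested and correlated subqueries
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER);
    INSERT INTO emp VALUES
        ('Ann', 'IT', 900), ('Bob', 'IT', 700),
        ('Cid', 'HR', 600), ('Dee', 'HR', 800);
""")

# Nested subquery in WHERE: employees earning above the overall average (750)
above_avg = con.execute(
    "SELECT name FROM emp WHERE salary > (SELECT AVG(salary) FROM emp)"
).fetchall()
print(above_avg)  # [('Ann',), ('Dee',)]

# Correlated subquery: the inner query refers to the outer row's department
top_per_dept = con.execute("""
    SELECT name, dept FROM emp e
    WHERE salary = (SELECT MAX(salary) FROM emp WHERE dept = e.dept)
    ORDER BY dept
""").fetchall()
print(top_per_dept)  # [('Dee', 'HR'), ('Ann', 'IT')]
```

The correlated form re-evaluates the inner query per outer row, which is exactly why the course pairs it with a discussion of performance.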
Data Manipulation Language
- Inserting rows (INSERT clause)
- Inserting rows from a query (INSERT ... SELECT)
- Updating rows (UPDATE)
- Deleting rows (DELETE)
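The four DML operations above can be sketched in one short session; SQLite is used for portability, and the statements are standard SQL that runs unchanged on MySQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
con.execute("CREATE TABLE archive (id INTEGER, val TEXT)")

con.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'c')")  # multi-row INSERT
con.execute("UPDATE t SET val = 'B' WHERE id = 2")                # modify a row
con.execute("INSERT INTO archive SELECT * FROM t WHERE id > 1")   # insert from a query
con.execute("DELETE FROM t WHERE id = 3")                         # remove a row

rows = con.execute("SELECT * FROM t ORDER BY id").fetchall()
archived = con.execute("SELECT COUNT(*) FROM archive").fetchone()[0]
print(rows)      # [(1, 'a'), (2, 'B')]
print(archived)  # 2
```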
Data Definition Language
- Creating, altering and dropping objects (CREATE, ALTER, DROP)
- Creating tables using a subquery (CREATE TABLE ... AS SELECT ...)
Constraints
- NULL and NOT NULL options
- CONSTRAINT clause
- ENUM type
- SET type
- PRIMARY KEY constraint
- UNIQUE constraint
- FOREIGN KEY constraint
- DEFAULT clause
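Constraint behaviour can be sketched as follows; SQLite is used for the demonstration, and MySQL/InnoDB enforces the same rules (ENUM and SET are MySQL-specific column types and are not shown here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite opt-in; InnoDB enforces FKs by default
con.executescript("""
    CREATE TABLE dept (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    );
    CREATE TABLE emp (
        id      INTEGER PRIMARY KEY,
        dept_id INTEGER NOT NULL REFERENCES dept(id),
        grade   TEXT DEFAULT 'junior'
    );
    INSERT INTO dept (id, name) VALUES (1, 'IT');
    INSERT INTO emp (id, dept_id) VALUES (10, 1);
""")

# The DEFAULT clause fills in the omitted column
grade = con.execute("SELECT grade FROM emp WHERE id = 10").fetchone()[0]
print(grade)  # junior

# A FOREIGN KEY violation is rejected instead of silently corrupting data
try:
    con.execute("INSERT INTO emp (id, dept_id) VALUES (11, 99)")  # no dept 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```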
Transactions
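The transaction topic above comes down to one guarantee: grouped changes either all persist (COMMIT) or none do (ROLLBACK). A minimal sketch, shown with SQLite; MySQL/InnoDB behaves the same once autocommit is disabled:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
con.commit()

# Transfer 70 from account 1 to account 2 atomically
try:
    con.execute("UPDATE account SET balance = balance - 70 WHERE id = 1")
    con.execute("UPDATE account SET balance = balance + 70 WHERE id = 2")
    con.commit()    # both updates persist together
except sqlite3.Error:
    con.rollback()  # on any failure, neither update persists

balances = con.execute("SELECT balance FROM account ORDER BY id").fetchall()
print(balances)  # [(30,), (120,)]
```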
Duration
7 hours (usually 1 day including breaks)
Requirements
Good SQL knowledge.
Overview
This course has been created for people already acquainted with SQL. It introduces features common to all SQL databases as well as MySQL-specific syntax, functions and features.
Course Outline
DQL (Data Query Language)
- Correlation in FROM, WHERE, SELECT and HAVING clauses
- Correlation and performance
- Using CASE, IF, COALESCE functions
- Using variables
- Casting and converting
- Dealing with NULL, NULL-safe operators
- Using regular expression with REGEXP operator
- Useful MySQL-specific GROUP BY functions (GROUP_CONCAT, etc.)
- GROUP BY WITH ROLLUP
- EXISTS, ALL, ANY
- Multi-table OUTER JOIN
- Rewriting subqueries as joins
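Several of the functions above can be sketched together; SQLite is used for the demonstration, and all three also exist in MySQL (MySQL's GROUP_CONCAT additionally supports ORDER BY and SEPARATOR inside the call):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE product (name TEXT, category TEXT, price INTEGER);
    INSERT INTO product VALUES
        ('pen', 'office', 2), ('desk', 'office', 120), ('mug', NULL, 5);
""")

# COALESCE replaces NULL; CASE labels each row
rows = con.execute("""
    SELECT name,
           COALESCE(category, 'uncategorized') AS cat,
           CASE WHEN price >= 100 THEN 'expensive' ELSE 'cheap' END AS label
    FROM product ORDER BY name
""").fetchall()
print(rows)
# [('desk', 'office', 'expensive'), ('mug', 'uncategorized', 'cheap'),
#  ('pen', 'office', 'cheap')]

# GROUP_CONCAT collapses each group into one comma-separated string
grouped = con.execute("""
    SELECT COALESCE(category, 'uncategorized') AS cat, GROUP_CONCAT(name)
    FROM product GROUP BY cat ORDER BY cat
""").fetchall()
print(grouped)
```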
DML (Data Manipulation Language)
- Multi-row inserts
- INSERT ... SELECT
- Using subqueries in DML statements
- Using variables in DML queries
- Locking tables and rows
- Updating data in many tables
- IGNORE clause
- REPLACE clause
- DELETE versus TRUNCATE
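REPLACE and multi-row INSERT can be sketched as follows, shown with SQLite; MySQL's REPLACE behaves the same way, deleting the conflicting row and inserting the new one. (TRUNCATE TABLE is DDL in MySQL and has no direct SQLite equivalent.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

con.execute("INSERT INTO kv VALUES ('a', '1'), ('b', '2')")  # multi-row insert
con.execute("REPLACE INTO kv VALUES ('a', 'one')")           # replaces existing 'a'

rows = con.execute("SELECT * FROM kv ORDER BY k").fetchall()
print(rows)  # [('a', 'one'), ('b', '2')]
```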
DDL (Data Definition Language)
- Creating tables with SELECT
- Temporary tables
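Both DDL topics above fit in a few lines; SQLite is used for the sketch, and MySQL's syntax is the same (`CREATE [TEMPORARY] TABLE ... AS SELECT ...`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp (name TEXT, salary INTEGER);
    INSERT INTO emp VALUES ('Ann', 900), ('Bob', 700);

    -- New table created and populated from a query result
    CREATE TABLE well_paid AS SELECT name FROM emp WHERE salary > 800;

    -- Temporary table: visible only to this connection/session
    CREATE TEMP TABLE scratch AS SELECT * FROM emp;
""")
names = con.execute("SELECT name FROM well_paid").fetchall()
scratch_rows = con.execute("SELECT COUNT(*) FROM scratch").fetchone()[0]
print(names)         # [('Ann',)]
print(scratch_rows)  # 2
```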
Stored Procedures
- Short introduction to MySQL stored procedures
Duration
14 hours (usually 2 days including breaks)
Requirements
- An understanding of big data concepts (HDFS, Hive, etc.)
- An understanding of relational databases (MySQL, etc.)
- Experience with the Linux command line
Overview
Sqoop is an open source software tool for transferring data between Hadoop and relational databases or mainframes. It can be used to import data from a relational database management system (RDBMS) such as MySQL or Oracle, or from a mainframe, into the Hadoop Distributed File System (HDFS). The data can then be transformed in Hadoop MapReduce and exported back into an RDBMS.
In this instructor-led, live training, participants will learn how to use Sqoop to import data from a traditional relational database into Hadoop storage such as HDFS or Hive, and vice versa.
By the end of this training, participants will be able to:
- Install and configure Sqoop
- Import data from MySQL to HDFS and Hive
- Import data from HDFS and Hive to MySQL
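The import/export round trip described above typically comes down to two commands. A hedged sketch using Sqoop's documented flags; the hostname, database, table names, credentials path, and HDFS directories are all placeholders:

```shell
# Import a MySQL table into HDFS
sqoop import \
  --connect jdbc:mysql://dbhost/shop \
  --username etl --password-file /user/etl/.pw \
  --table orders \
  --target-dir /data/orders

# Export the (possibly transformed) files back into a MySQL table
sqoop export \
  --connect jdbc:mysql://dbhost/shop \
  --username etl --password-file /user/etl/.pw \
  --table orders_out \
  --export-dir /data/orders
```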
Audience
- System administrators
- Data engineers
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request customized training for this course, please contact us to arrange it.
Course Outline
Introduction
- Moving data from legacy data stores to Hadoop
Installing and Configuring Sqoop
Overview of Sqoop Features and Architecture
Importing Data from MySQL to HDFS
Importing Data from MySQL to Hive
Transforming Data in Hadoop
Importing Data from HDFS to MySQL
Importing Data from Hive to MySQL
Importing Incrementally with Sqoop Jobs
Troubleshooting
Summary and Conclusion