Duration
14 hours (usually 2 days including breaks)
Requirements
- An understanding of data analysis
- Experience with Microsoft SQL Server
Overview
SSAS (SQL Server Analysis Services) is a Microsoft SQL Server online analytical processing (OLAP) and data mining tool for analyzing data across multiple databases, tables, or files. The semantic data models provided by SSAS are used by client applications such as Power BI, Excel, Reporting Services, and other data visualization tools.
In this instructor-led, live training (onsite or remote), participants will learn how to use SSAS to analyze large volumes of data in databases and data warehouses.
By the end of this training, participants will be able to:
- Install and configure SSAS
- Understand the relationship between SSAS, SSIS, and SSRS
- Apply multidimensional data modeling to extract business insights from data
- Design OLAP (Online Analytical Processing) cubes
- Query and manipulate multidimensional data using the MDX (Multidimensional Expressions) query language
- Deploy real-world BI solutions using SSAS
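The central idea behind the outcomes above is the multidimensional cube: fact rows carry dimension keys and numeric measures, and queries aggregate a measure along one or more dimensions. As an illustration only (this is not the SSAS API, and the table and column names are hypothetical), the core rollup operation can be sketched in plain Python:

```python
from collections import defaultdict

# Illustrative only: a tiny in-memory "cube" mimicking what an OLAP engine
# such as SSAS does at scale. Each fact row carries dimension keys
# (year, region) and a measure (sales).
facts = [
    {"year": 2023, "region": "EMEA", "sales": 120.0},
    {"year": 2023, "region": "APAC", "sales": 80.0},
    {"year": 2024, "region": "EMEA", "sales": 150.0},
    {"year": 2024, "region": "APAC", "sales": 95.0},
]

def rollup(rows, dimension, measure):
    """Aggregate a measure along one dimension (a 1-D slice of the cube)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row[measure]
    return dict(totals)

print(rollup(facts, "year", "sales"))    # {2023: 200.0, 2024: 245.0}
print(rollup(facts, "region", "sales"))  # {'EMEA': 270.0, 'APAC': 175.0}
```

In SSAS the same slicing is expressed declaratively (for example in MDX) and precomputed aggregations make it fast over very large fact tables.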
Audience
- BI (Business Intelligence) professionals
- Data Analysts
- Database and data warehousing professionals
Format of the Course
- Interactive lecture and discussion
- Lots of exercises and practice
- Hands-on implementation in a live-lab environment
Course Customization Options
- This training is based on the latest version of Microsoft SQL Server and SSAS.
- To request a customized training for this course, please contact us to arrange it.
Course Outline
Introduction
Installing and Configuring SSAS
Overview of SSAS Features and Architecture
Data Insight and Business Intelligence
Operational Analytics
New Functionality: Columnstore Indexes
Querying Data in SSAS Tabular
Querying Multidimensional Data
Enhanced SSIS
Enhanced MDS
Troubleshooting
Summary and Conclusion
Duration
14 hours (usually 2 days including breaks)
Requirements
Knowledge of Windows, basic knowledge of SQL and relational databases.
Overview
This training is dedicated to the basics of creating a data warehouse environment based on MS SQL Server 2008.
Participants gain the foundations for designing and building a data warehouse that runs on MS SQL Server 2008.
They learn how to build a simple ETL process based on SSIS, and then how to design and implement a data cube using SSAS.
Participants will be able to manage an OLAP database: creating and deleting OLAP databases and processing partitions online.
Participants will also acquire knowledge of XML/A scripting and the MDX language.
Course Outline
- Basics, objectives, and applications of data warehouses; data warehouse server types
- Basics of building ETL processes in SSIS
- Basic design of data cubes in Analysis Services: measure groups and measures
- Dimensions, hierarchies, and attributes
- Developing the data cube project: calculated measures, partitions, perspectives, translations, actions, and KPIs
- Build and deploy; processing partitions
- XML/A basics: partitioning, full and incremental processing, deleting partitions, and processing aggregations
- MDX language basics
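The ETL portion of the outline above follows the classic extract-transform-load pattern. As a minimal sketch only (an SSIS package would do this with data-flow components; the column names and the in-memory SQLite staging table here are hypothetical stand-ins for a real warehouse), the pattern looks like:

```python
import csv
import io
import sqlite3

# Extract: in a real SSIS package this would be a flat-file source.
raw_csv = io.StringIO("order_id,amount\n1,10.50\n2,oops\n3,4.25\n")

# Transform: parse, convert types, and reject malformed rows.
clean_rows = []
for row in csv.DictReader(raw_csv):
    try:
        clean_rows.append((int(row["order_id"]), float(row["amount"])))
    except ValueError:
        pass  # a real ETL flow would redirect bad rows to an error output

# Load: write the cleaned rows into a warehouse staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO staging_orders VALUES (?, ?)", clean_rows)

total = conn.execute("SELECT SUM(amount) FROM staging_orders").fetchone()[0]
print(total)  # 14.75 (the malformed row was rejected)
```

The same three stages scale up in SSIS with buffering, parallelism, and error outputs, but the shape of the flow is identical.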
Duration
21 hours (usually 3 days including breaks)
Requirements
Overview
Cloudera Impala is an open source massively parallel processing (MPP) SQL query engine for Apache Hadoop clusters.
Impala enables users to issue low-latency SQL queries against data stored in the Hadoop Distributed File System and Apache HBase without requiring data movement or transformation.
Audience
This course is aimed at analysts and data scientists performing analysis on data stored in Hadoop via Business Intelligence or SQL tools.
After this course, delegates will be able to:
- Extract meaningful information from Hadoop clusters with Impala.
- Write programs in the Impala SQL dialect to facilitate Business Intelligence.
- Troubleshoot Impala.
Course Outline
Introduction to Impala
- What is Impala?
- How Impala Differs from Relational Databases
- Limitations and Future Directions
- Using the Impala Shell
- The Impala Daemon, Statestore, and Catalog Service
Loading Impala
- Explore a New Impala Instance
- Load CSV Data from Local Files
- Point an Impala Table at Existing Data Files
Analyzing Data with Impala
- Describe the Impala Table
- Basic Syntax and Querying
- Data Types
- Filtering, Sorting, and Limiting Results
- Joining and Grouping Data
- Data Loading and Querying Examples
- Improving Impala Performance
- How Impala works with Hadoop file formats
- Hands-On Exercise: Interactive Analysis with Impala
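The filtering, sorting, limiting, joining, and grouping patterns listed above are standard SQL, and Impala's dialect supports them in largely the same form. Purely so the sketch runs locally, the example below uses SQLite rather than an Impala cluster, and the table names are hypothetical; Impala-specific features (file formats, partitioned tables) are of course not shown:

```python
import sqlite3

# Stand-in tables; on a real cluster these would live in HDFS or HBase.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 30.0), (1, 20.0), (2, 45.0);
""")

# Join + group + filter + sort + limit in one statement.
top_customers = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    HAVING total > 25
    ORDER BY total DESC
    LIMIT 5
""").fetchall()
print(top_customers)  # [('Ada', 50.0), ('Grace', 45.0)]
```

Against Impala, the same SELECT statement would typically be issued through the impala-shell or a BI tool rather than a local database driver.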
Programming Impala Applications
- Overview of the Impala SQL Dialect
- Overview of Impala Programming Interfaces
Troubleshooting Impala
- Troubleshooting Impala SQL Syntax Issues
- Troubleshooting I/O Capacity Problems
- Impala Web User Interface for Debugging
Duration
21 hours (usually 3 days including breaks)
Requirements
A basic aptitude for making sense out of data.
Overview
KNIME Analytics Platform is a leading open source option for data-driven innovation, helping you discover the potential hidden in your data, mine for fresh insights, or predict new futures. With more than 1000 modules, hundreds of ready-to-run examples, a comprehensive range of integrated tools, and the widest choice of advanced algorithms available, KNIME Analytics Platform is the perfect toolbox for any data scientist and business analyst.
This course for KNIME Analytics Platform is an ideal opportunity for beginners, advanced users, and KNIME experts to be introduced to KNIME, to learn how to use it more effectively, and to create clear, comprehensive reports based on KNIME workflows.
Certification
NobleProg and KNIME design, build and deliver end-to-end advanced analytics solutions that are customized to each customer’s business needs.
By combining KNIME’s leading open solution for data driven innovation with NobleProg’s domain and technical expertise in analytics, we help our customers reduce costs and gain data-driven insights for better business outcomes.
Course Outline
Getting Started with KNIME
- KNIME Analytics Platform Overview
- Installation
- GUI Based Programming
- Workflows
- Nodes
- Importing Workflows
Data Access
- Reading From A Text File
- Database Access
- External REST Services
- Artificial Data Generation
Visualization
- Data Visualization
- JavaScript Based Visualization
- Data Views
- Highlighting
- Graphics Through R
ETL And Data Manipulation
- Data Filtering
- Data Aggregations
- Concatenation And Join
- Data Transformations
- Workflow Clean Up
- Datetime Manipulation
- In-Database Processing
Data Mining
- Process Overview
- Training And Applying A Decision Tree
- Scoring A Model
- PMML
- K-Means Clustering
- Recommendation Engines
- Other Models Available
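KNIME's k-Means node wraps the classic two-step iteration: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A bare-bones one-dimensional sketch of that algorithm (illustrative only; the data and initial centroids are made up, and KNIME handles multi-dimensional data, normalization, and convergence checks for you):

```python
def kmeans_1d(points, centroids, iterations=10):
    """Toy 1-D k-means: alternate assignment and centroid-update steps."""
    for _ in range(iterations):
        # Assignment step: group each point with its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean
        # (keep the old centroid if its cluster came up empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
print(kmeans_1d(data, centroids=[0.0, 5.0]))  # two centroids near 1.0 and 9.5
```

In the KNIME workflow the same logic is a single k-Means node; the scoring and PMML topics above cover exporting and applying the resulting model.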
Reporting
- Overview
- BIRT Integration
- KNIME WebPortal
Conclusion