Big Data Business Intelligence for Criminal Intelligence Analysis Training Course

Duration

35 hours (usually 5 days including breaks)

Requirements

  • Knowledge of law enforcement processes and data systems
  • Basic understanding of SQL/Oracle or relational databases
  • Basic understanding of statistics (at spreadsheet level)

Overview

Advances in technology and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data poses are nearly as daunting as its promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.

In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.

By the end of this training, participants will be able to:

  • Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
  • Implement industrial big data storage and processing solutions for data analysis
  • Prepare a proposal for the adoption of the most appropriate tools and processes for enabling a data-driven approach to criminal investigation

Audience

  • Law Enforcement specialists with a technical background

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Course Outline

=====
Day 01
=====
Overview of Big Data Business Intelligence for Criminal Intelligence Analysis

  • Case Studies from Law Enforcement – Predictive Policing
  • Big Data adoption rate in Law Enforcement Agencies and how they are aligning their future operations around Big Data Predictive Analytics
  • Emerging technology solutions such as gunshot sensors, surveillance video and social media
  • Using Big Data technology to mitigate information overload
  • Interfacing Big Data with Legacy data
  • Basic understanding of enabling technologies in predictive analytics
  • Data Integration & Dashboard visualization
  • Fraud management
  • Business Rules and Fraud detection
  • Threat detection and profiling
  • Cost benefit analysis for Big Data implementation

Introduction to Big Data

  • Main characteristics of Big Data — Volume, Variety, Velocity and Veracity
  • MPP (Massively Parallel Processing) architecture
  • Data Warehouses – static schema, slowly evolving dataset
  • MPP Databases: Greenplum, Exadata, Teradata, Netezza, Vertica, etc.
  • Hadoop-based solutions – no conditions on the structure of the dataset
  • Typical pattern: HDFS, MapReduce (crunch), retrieve from HDFS
  • Apache Spark for stream processing
  • Batch processing – suited for analytical/non-interactive workloads
  • Velocity: CEP (Complex Event Processing) for streaming data
  • Typical choices – CEP products (e.g. Infostreams, Apama, MarkLogic)
  • Less production-ready – Storm/S4
  • NoSQL Databases – (columnar and key-value): best suited as an analytical adjunct to a data warehouse/database

NoSQL solutions

  • KV Store – Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB)
  • KV Store – Dynamo, Voldemort, Dynomite, SubRecord, Mo8onDb, DovetailDB
  • KV Store (Hierarchical) – GT.M, Caché
  • KV Store (Ordered) – TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord
  • KV Cache – Memcached, Repcached, Coherence, Infinispan, eXtreme Scale, JBossCache, Velocity, Terracotta
  • Tuple Store – Gigaspaces, Coord, Apache River
  • Object Database – ZopeDB, db4o, Shoal
  • Document Store – CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML databases, ThruDB, CloudKit, Persevere, Riak (Basho), Scalaris
  • Wide Columnar Store – BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI

Varieties of Data: Introduction to Data Cleaning issues in Big Data

  • RDBMS – static structure/schema; does not promote an agile, exploratory environment
  • NoSQL – semi structured, enough structure to store data without exact schema before storing data
  • Data cleaning issues

Hadoop

  • When to select Hadoop?
  • STRUCTURED – Enterprise data warehouses/databases can store massive data (at a cost) but impose structure (not good for active exploration)
  • SEMI-STRUCTURED data – difficult to handle with traditional solutions (DW/DB)
  • Warehousing data = HUGE effort and static even after implementation
  • For variety & volume of data, crunched on commodity hardware – HADOOP
  • Commodity H/W needed to create a Hadoop Cluster

Introduction to MapReduce/HDFS

  • MapReduce – distribute computing over multiple servers
  • HDFS – make data available locally for the computing process (with redundancy)
  • Data – can be unstructured/schema-less (unlike RDBMS)
  • Developer responsibility to make sense of data
  • Programming MapReduce = working with Java (pros/cons); manually loading data into HDFS (see the word-count sketch below)
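
To make the MapReduce contract above concrete, here is a minimal word-count sketch written for Hadoop Streaming, which lets Python stand in for Java; the scripts and sample job are illustrative only, not part of the course materials.

```python
#!/usr/bin/env python3
# mapper.py — reads raw text on stdin and emits one "word<TAB>1" pair per word
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py — Hadoop sorts mapper output by key, so equal words arrive
# together; sum the counts for each run of identical keys
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    if word != current_word and current_word is not None:
        print(f"{current_word}\t{count}")
        count = 0
    current_word = word
    count += int(n)
if current_word is not None:
    print(f"{current_word}\t{count}")
```

A job of this shape is submitted through the hadoop-streaming JAR, pointing its -mapper and -reducer options at the two scripts, with input and output paths on HDFS.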

=====
Day 02
=====
Big Data Ecosystem — Building Big Data ETL (Extract, Transform, Load) — Which Big Data Tools to use and when?

  • Hadoop vs. Other NoSQL solutions
  • For interactive, random access to data
  • HBase (column-oriented database) on top of Hadoop
  • Random access to data, but with restrictions imposed (max 1 PB)
  • Not good for ad-hoc analytics; good for logging, counting, time-series
  • Sqoop – Import from databases to Hive or HDFS (JDBC/ODBC access)
  • Flume – Stream data (e.g. log data) into HDFS

Big Data Management System

  • Moving parts, compute nodes that start/fail: ZooKeeper – for configuration, coordination and naming services (a minimal ZooKeeper sketch follows this list)
  • Complex pipelines/workflows: Oozie – manages workflows, dependencies and daisy-chaining
  • Deployment, configuration, cluster management, upgrades, etc. (sys admin): Ambari
  • In the cloud: Whirr
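
As a small illustration of the ZooKeeper role listed above, the sketch below uses the kazoo Python client for shared configuration and liveness tracking; the host address, paths and values are hypothetical placeholders.

```python
# A minimal ZooKeeper sketch with the kazoo client: a shared config znode
# plus an ephemeral node that disappears automatically if this worker dies.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # placeholder ensemble address
zk.start()

# configuration service: create the path if missing, then write and read it
zk.ensure_path("/app/config")
zk.set("/app/config", b"batch_size=500")
value, stat = zk.get("/app/config")
print(value.decode(), "version:", stat.version)

# coordination/naming: an ephemeral node marks this worker as alive
zk.create("/app/workers/worker-1", b"", ephemeral=True, makepath=True)
print("live workers:", zk.get_children("/app/workers"))

zk.stop()
```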

Predictive Analytics — Fundamental Techniques and Machine Learning based Business Intelligence

  • Introduction to Machine Learning
  • Learning classification techniques
  • Bayesian Prediction – preparing a training file
  • Support Vector Machines
  • KNN p-Tree Algebra & vertical mining
  • Neural Networks
  • The Big Data large-variable problem – Random Forest (RF)
  • The Big Data automation problem – multi-model ensemble RF
  • Automation through Soft10-M
  • Text analytics tool – Treeminer
  • Agile learning
  • Agent-based learning
  • Distributed learning
  • Introduction to open source tools for predictive analytics: R, Python, RapidMiner, Mahout (a minimal classification sketch follows this list)
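
To ground the classification techniques listed above, here is a minimal supervised-classification sketch using a Random Forest in scikit-learn; the data is synthetic and merely stands in for real investigative features.

```python
# A minimal Random Forest classification sketch on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# fabricate a labeled dataset: 5,000 rows, 40 features, 2 classes
X, y = make_classification(n_samples=5000, n_features=40,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The same fit/predict pattern applies to the other classifiers in the list (SVMs, Bayesian models, neural networks), which is what makes quick model comparison practical.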

Predictive Analytics Ecosystem and its application in Criminal Intelligence Analysis

  • Technology and the investigative process
  • Insight analytics
  • Visualization analytics
  • Structured predictive analytics
  • Unstructured predictive analytics
  • Threat/fraudster/vendor profiling
  • Recommendation engines
  • Pattern detection
  • Rule/scenario discovery – failure, fraud, optimization
  • Root cause discovery
  • Sentiment analysis (a minimal sketch follows this list)
  • CRM analytics
  • Network analytics
  • Text analytics for obtaining insights from transcripts, witness statements, internet chatter, etc.
  • Technology-assisted review
  • Fraud analytics
  • Real-time analytics
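
As a taste of the sentiment-analysis topic above, the sketch below scores two invented sentences with NLTK's VADER lexicon; the sentences are hypothetical.

```python
# A minimal sentiment-analysis sketch using NLTK's VADER lexicon.
import nltk
nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
for text in ["The officers responded quickly and professionally.",
             "Nobody answered our calls for three days."]:
    # polarity_scores returns neg/neu/pos plus a compound score in [-1, 1]
    print(text, "->", sia.polarity_scores(text))
```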

=====
Day 03
=====
Real Time and Scalable Analytics Over Hadoop

  • Why common analytic algorithms fail in Hadoop/HDFS
  • Apache Hama – for Bulk Synchronous Parallel distributed computing
  • Apache Spark – for cluster computing and real-time analytics
  • CMU GraphLab – graph-based asynchronous approach to distributed computing
  • KNN p-Tree Algebra-based approach from Treeminer for reduced hardware cost of operation

Tools for eDiscovery and Forensics

  • eDiscovery over Big Data vs. Legacy data – a comparison of cost and performance
  • Predictive coding and Technology Assisted Review (TAR)
  • Live demo of vMiner for understanding how TAR enables faster discovery
  • Faster indexing through HDFS – Velocity of data
  • NLP (Natural Language Processing) – open source products and techniques
  • eDiscovery in foreign languages — technology for foreign language processing

Big Data BI for Cyber Security – Getting a 360-degree view, speedy data collection and threat identification

  • Understanding the basics of security analytics — attack surface, security misconfiguration, host defenses
  • Network infrastructure / Large datapipe / Response ETL for real time analytic
  • Prescriptive vs. predictive – fixed rules vs. auto-discovery of threat rules from metadata

Gathering disparate data for Criminal Intelligence Analysis

  • Using IoT (Internet of Things) as sensors for capturing data
  • Using Satellite Imagery for Domestic Surveillance
  • Using surveillance and image data for criminal identification
  • Other data gathering technologies — drones, body cameras, GPS tagging systems and thermal imaging technology
  • Combining automated data retrieval with data obtained from informants, interrogation, and research
  • Forecasting criminal activity (a minimal forecasting sketch follows this list)
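
As a toy illustration of forecasting, the sketch below fits an ARIMA model to a synthetic weekly incident-count series with statsmodels; all numbers are fabricated, and real deployments would use far richer features than counts alone.

```python
# A minimal time-series forecasting sketch on synthetic weekly counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
weeks = pd.date_range("2022-01-02", periods=104, freq="W")
# two years of counts with a yearly seasonal swing plus noise
counts = pd.Series(
    50 + 10 * np.sin(np.arange(104) * 2 * np.pi / 52) + rng.normal(0, 3, 104),
    index=weeks)

model = ARIMA(counts, order=(1, 0, 1)).fit()
print(model.forecast(steps=4))   # point forecasts for the next four weeks
```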

=====
Day 04
=====
Fraud prevention BI from Big Data — Fraud Analytics

  • Basic classification of fraud analytics — rules-based vs. predictive analytics
  • Supervised vs unsupervised Machine learning for Fraud pattern detection
  • Business to business fraud, medical claims fraud, insurance fraud, tax evasion and money laundering

Social Media Analytics — Intelligence gathering and analysis

  • How Social Media is used by criminals to organize, recruit and plan
  • Big Data ETL API for extracting social media data
  • Text, images, metadata and video
  • Sentiment analysis from social media feed
  • Contextual and non-contextual filtering of social media feed
  • Social Media Dashboard to integrate diverse social media
  • Automated profiling of social media profiles
  • A live demo of each analytic will be given using the Treeminer tool

Big Data Analytics in image processing and video feeds

  • Image Storage techniques in Big Data — Storage solution for data exceeding petabytes
  • LTFS (Linear Tape File System) and LTO (Linear Tape Open)
  • GPFS-LTFS (General Parallel File System – Linear Tape File System) — layered storage solution for big image data
  • Fundamentals of image analytics
  • Object recognition
  • Image segmentation
  • Motion tracking (a minimal sketch follows this list)
  • 3-D image reconstruction
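
To make the motion-tracking item concrete, here is a minimal background-subtraction sketch with OpenCV; the video path is a placeholder.

```python
# A minimal motion-tracking sketch: background subtraction + bounding boxes.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")          # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                  # foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                # skip small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("motion", frame)
    if cv2.waitKey(30) & 0xFF == 27:                # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```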

Biometrics, DNA and Next Generation Identification Programs

  • Beyond fingerprinting and facial recognition
  • Speech recognition, keystroke dynamics (analyzing a user's typing pattern) and CODIS (Combined DNA Index System)
  • Beyond DNA matching: using forensic DNA phenotyping to construct a face from DNA samples

Big Data Dashboard for quick accessibility and display of diverse data:

  • Integration of existing application platform with Big Data Dashboard
  • Big Data management
  • Case Study of Big Data Dashboard: Tableau and Pentaho
  • Use Big Data apps to push location-based services in Govt.
  • Tracking system and management

=====
Day 05
=====
How to justify Big Data BI implementation within an organization:

  • Defining the ROI (Return on Investment) of a Big Data implementation
  • Case studies on saving analyst time in the collection and preparation of data – increasing productivity
  • Savings from lower database licensing costs
  • Revenue gain from location-based services
  • Cost savings from fraud prevention
  • An integrated spreadsheet approach for calculating approximate expenses vs. revenue gains/savings from a Big Data implementation (a minimal calculation sketch follows this list)
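
The spreadsheet approach reduces to simple arithmetic; the sketch below shows the shape of the calculation, with every figure a made-up placeholder to be replaced by the organization's own estimates.

```python
# A back-of-the-envelope ROI sketch; all figures are placeholders.
costs = {"hardware": 250_000, "software": 120_000,
         "staff": 300_000, "training": 50_000}
gains = {"analyst_time_saved": 280_000, "license_savings": 150_000,
         "fraud_prevented": 400_000}

total_cost = sum(costs.values())
total_gain = sum(gains.values())
roi = (total_gain - total_cost) / total_cost
print(f"cost ${total_cost:,}  gain ${total_gain:,}  ROI {roi:.1%}")
```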

Step-by-step procedure for replacing a legacy data system with a Big Data system

  • Big Data Migration Roadmap
  • What critical information is needed before architecting a Big Data system?
  • What are the different ways of calculating the Volume, Velocity, Variety and Veracity of data?
  • How to estimate data growth
  • Case studies

Review of Big Data vendors and their products

  • Accenture
  • APTEAN (Formerly CDC Software)
  • Cisco Systems
  • Cloudera
  • Dell
  • EMC
  • GoodData Corporation
  • Guavus
  • Hitachi Data Systems
  • Hortonworks
  • HP
  • IBM
  • Informatica
  • Intel
  • Jaspersoft
  • Microsoft
  • MongoDB (Formerly 10Gen)
  • MU Sigma
  • Netapp
  • Opera Solutions
  • Oracle
  • Pentaho
  • Platfora
  • Qliktech
  • Quantum
  • Rackspace
  • Revolution Analytics
  • Salesforce
  • SAP
  • SAS Institute
  • Sisense
  • Software AG/Terracotta
  • Soft10 Automation
  • Splunk
  • Sqrrl
  • Supermicro
  • Tableau Software
  • Teradata
  • Think Big Analytics
  • Tidemark Systems
  • Treeminer
  • VMware (Part of EMC)

Q/A session

Big Data Business Intelligence for Govt. Agencies Training Course

Duration

35 hours (usually 5 days including breaks)

Requirements

  • Basic knowledge of government business operations and data systems in their domain
  • Basic understanding of SQL/Oracle or relational databases
  • Basic understanding of statistics (at spreadsheet level)

Overview

Advances in technology and the increasing amount of information are transforming how business is conducted in many industries, including government. Government data generation and digital archiving rates are on the rise due to the rapid growth of mobile devices and applications, smart sensors and devices, cloud computing solutions, and citizen-facing portals. As digital information expands and becomes more complex, information management, processing, storage, security, and disposition become more complex as well. New capture, search, discovery, and analysis tools are helping organizations gain insights from their unstructured data. The government market is at a tipping point: realizing that information is a strategic asset, government needs to protect, leverage, and analyze both structured and unstructured information to better serve citizens and meet mission requirements. As government leaders strive to evolve data-driven organizations that successfully accomplish their missions, they are laying the groundwork to correlate dependencies across events, people, processes, and information.

High-value government solutions will be created from a mashup of the most disruptive technologies:

  • Mobile devices and applications
  • Cloud services
  • Social business technologies and networking
  • Big Data and analytics

IDC predicts that by 2020, the IT industry will reach $5 trillion, approximately $1.7 trillion larger than today, and that 80% of the industry’s growth will be driven by these 3rd Platform technologies. In the long term, these technologies will be key tools for dealing with the complexity of increased digital information. Big Data is one of the intelligent industry solutions and allows government to make better decisions by taking action based on patterns revealed by analyzing large volumes of data — related and unrelated, structured and unstructured.

But accomplishing these feats takes far more than simply accumulating massive quantities of data. “Making sense of these volumes of Big Data requires cutting-edge tools and technologies that can analyze and extract useful knowledge from vast and diverse streams of information,” Tom Kalil and Fen Zhao of the White House Office of Science and Technology Policy wrote in a post on the OSTP Blog.

The White House took a step toward helping agencies find these technologies when it established the National Big Data Research and Development Initiative in 2012. The initiative included more than $200 million to make the most of the explosion of Big Data and the tools needed to analyze it.

The challenges that Big Data poses are nearly as daunting as its promise is encouraging. Storing data efficiently is one of these challenges. As always, budgets are tight, so agencies must minimize the per-megabyte price of storage and keep the data within easy access so that users can get it when they want it and how they need it. Backing up massive quantities of data heightens the challenge.

Analyzing the data effectively is another major challenge. Many agencies employ commercial tools that enable them to sift through the mountains of data, spotting trends that can help them operate more efficiently. (A recent study by MeriTalk found that federal IT executives think Big Data could help agencies save more than $500 billion while also fulfilling mission objectives.)

Custom-developed Big Data tools also are allowing agencies to address the need to analyze their data. For example, the Oak Ridge National Laboratory’s Computational Data Analytics Group has made its Piranha data analytics system available to other agencies. The system has helped medical researchers find a link that can alert doctors to aortic aneurysms before they strike. It’s also used for more mundane tasks, such as sifting through résumés to connect job candidates with hiring managers.

Course Outline

Each session is 2 hours

Day-1: Session-1: Business Overview of Why Big Data Business Intelligence in Govt.

  • Case Studies from NIH, DoE
  • Big Data adoption rate in Govt. agencies and how they are aligning their future operations around Big Data Predictive Analytics
  • Broad-scale application areas in DoD, NSA, IRS, USDA, etc.
  • Interfacing Big Data with Legacy data
  • Basic understanding of enabling technologies in predictive analytics
  • Data Integration & Dashboard visualization
  • Fraud management
  • Business rule generation and fraud detection
  • Threat detection and profiling
  • Cost benefit analysis for Big Data implementation

Day-1: Session-2: Introduction to Big Data-1

  • Main characteristics of Big Data – volume, variety, velocity and veracity. MPP (Massively Parallel Processing) architecture for volume.
  • Data Warehouses – static schema, slowly evolving dataset
  • MPP databases like Greenplum, Exadata, Teradata, Netezza, Vertica, etc.
  • Hadoop-based solutions – no conditions on the structure of the dataset
  • Typical pattern: HDFS, MapReduce (crunch), retrieve from HDFS
  • Batch processing – suited for analytical/non-interactive workloads
  • Velocity: CEP (Complex Event Processing) for streaming data
  • Typical choices – CEP products (e.g. Infostreams, Apama, MarkLogic)
  • Less production-ready – Storm/S4
  • NoSQL databases – (columnar and key-value): best suited as an analytical adjunct to a data warehouse/database

Day-1: Session-3: Introduction to Big Data-2

NoSQL solutions

  • KV Store – Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB)
  • KV Store – Dynamo, Voldemort, Dynomite, SubRecord, Mo8onDb, DovetailDB
  • KV Store (Hierarchical) – GT.M, Caché
  • KV Store (Ordered) – TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord
  • KV Cache – Memcached, Repcached, Coherence, Infinispan, eXtreme Scale, JBossCache, Velocity, Terracotta
  • Tuple Store – Gigaspaces, Coord, Apache River
  • Object Database – ZopeDB, db4o, Shoal
  • Document Store – CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML databases, ThruDB, CloudKit, Persevere, Riak (Basho), Scalaris
  • Wide Columnar Store – BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI

Varieties of Data: Introduction to Data Cleaning issues in Big Data

  • RDBMS – static structure/schema; does not promote an agile, exploratory environment
  • NoSQL – semi structured, enough structure to store data without exact schema before storing data
  • Data cleaning issues

Day-1: Session-4: Big Data Introduction-3: Hadoop

  • When to select Hadoop?
  • STRUCTURED – Enterprise data warehouses/databases can store massive data (at a cost) but impose structure (not good for active exploration)
  • SEMI-STRUCTURED data – difficult to handle with traditional solutions (DW/DB)
  • Warehousing data = HUGE effort and static even after implementation
  • For variety & volume of data, crunched on commodity hardware – HADOOP
  • Commodity H/W needed to create a Hadoop Cluster

Introduction to MapReduce/HDFS

  • MapReduce – distribute computing over multiple servers
  • HDFS – make data available locally for the computing process (with redundancy)
  • Data – can be unstructured/schema-less (unlike RDBMS)
  • Developer responsibility to make sense of data
  • Programming MapReduce = working with Java (pros/cons), manually loading data into HDFS

Day-2: Session-1: Big Data Ecosystem – Building Big Data ETL: the universe of Big Data tools – which one to use and when?

  • Hadoop vs. Other NoSQL solutions
  • For interactive, random access to data
  • HBase (column-oriented database) on top of Hadoop
  • Random access to data, but with restrictions imposed (max 1 PB)
  • Not good for ad-hoc analytics; good for logging, counting, time-series
  • Sqoop – Import from databases to Hive or HDFS (JDBC/ODBC access)
  • Flume – Stream data (e.g. log data) into HDFS

Day-2: Session-2: Big Data Management System

  • Moving parts, compute nodes that start/fail: ZooKeeper – for configuration, coordination and naming services
  • Complex pipelines/workflows: Oozie – manages workflows, dependencies and daisy-chaining
  • Deployment, configuration, cluster management, upgrades, etc. (sys admin): Ambari
  • In the cloud: Whirr

Day-2: Session-3: Predictive analytics in Business Intelligence-1: Fundamental Techniques & Machine-learning-based BI:

  • Introduction to Machine learning
  • Learning classification techniques
  • Bayesian prediction – preparing a training file
  • Support Vector Machines
  • KNN p-Tree Algebra & vertical mining
  • Neural networks
  • The Big Data large-variable problem – Random Forest (RF)
  • The Big Data automation problem – multi-model ensemble RF
  • Automation through Soft10-M
  • Text analytics tool – Treeminer
  • Agile learning
  • Agent-based learning
  • Distributed learning
  • Introduction to open source tools for predictive analytics: R, RapidMiner, Mahout

Day-2: Session-4: Predictive analytics ecosystem-2: Common predictive analytics problems in Govt.

  • Insight analytics
  • Visualization analytics
  • Structured predictive analytics
  • Unstructured predictive analytics
  • Threat/fraudster/vendor profiling
  • Recommendation engines
  • Pattern detection
  • Rule/scenario discovery – failure, fraud, optimization
  • Root cause discovery
  • Sentiment analysis
  • CRM analytics
  • Network analytics
  • Text analytics
  • Technology-assisted review
  • Fraud analytics
  • Real-time analytics

Day-3: Session-1: Real Time and Scalable Analytics Over Hadoop

  • Why common analytic algorithms fail in Hadoop/HDFS
  • Apache Hama – for Bulk Synchronous Parallel distributed computing
  • Apache Spark – for cluster computing and real-time analytics
  • CMU GraphLab – graph-based asynchronous approach to distributed computing
  • KNN p-Tree Algebra-based approach from Treeminer for reduced hardware cost of operation

Day-3: Session-2: Tools for eDiscovery and Forensics

  • eDiscovery over Big Data vs. Legacy data – a comparison of cost and performance
  • Predictive coding and Technology Assisted Review (TAR)
  • Live demo of a TAR product (vMiner) to understand how TAR enables faster discovery
  • Faster indexing through HDFS – velocity of data
  • NLP (Natural Language Processing) – various techniques and open source products
  • eDiscovery in foreign languages – technology for foreign language processing

Day-3: Session-3: Big Data BI for Cyber Security – getting a 360-degree view, from speedy data collection to threat identification

  • Understanding the basics of security analytics – attack surface, security misconfiguration, host defenses
  • Network infrastructure/ Large datapipe / Response ETL for real time analytic
  • Prescriptive vs. predictive – fixed rules vs. auto-discovery of threat rules from metadata

Day-3: Session-4: Big Data in USDA: Applications in Agriculture

  • Introduction to IoT (Internet of Things) for agriculture – sensor-based Big Data and control
  • Introduction to satellite imaging and its application in agriculture
  • Integrating sensor and image data for soil fertility, cultivation recommendations and forecasting
  • Agriculture insurance and Big Data
  • Crop loss forecasting

Day-4: Session-1: Fraud prevention BI from Big Data in Govt. – fraud analytics:

  • Basic classification of fraud analytics – rules-based vs. predictive analytics
  • Supervised vs unsupervised Machine learning for Fraud pattern detection
  • Vendor fraud/over charging for projects
  • Medicare and Medicaid fraud – fraud detection techniques for claims processing
  • Travel reimbursement frauds
  • IRS refund frauds
  • Case studies and live demos will be given wherever data is available.

Day-4: Session-2: Social Media Analytics – intelligence gathering and analysis

  • Big Data ETL API for extracting social media data
  • Text, images, metadata and video
  • Sentiment analysis from social media feed
  • Contextual and non-contextual filtering of social media feed
  • Social Media Dashboard to integrate diverse social media
  • Automated profiling of social media profiles
  • A live demo of each analytic will be given using the Treeminer tool.

Day-4: Session-3: Big Data Analytics in image processing and video feeds

  • Image storage techniques in Big Data – storage solutions for data exceeding petabytes
  • LTFS (Linear Tape File System) and LTO (Linear Tape Open)
  • GPFS-LTFS (layered storage solution for big image data)
  • Fundamentals of image analytics
  • Object recognition
  • Image segmentation
  • Motion tracking
  • 3-D image reconstruction

Day-4: Session-4: Big Data applications in NIH:

  • Emerging areas of Bio-informatics
  • Meta-genomics and Big Data mining issues
  • Big Data predictive analytics for Pharmacogenomics, Metabolomics and Proteomics
  • Big Data in downstream Genomics processes
  • Application of Big Data predictive analytics in public health

Big Data Dashboard for quick accessibility and display of diverse data:

  • Integration of existing application platform with Big Data Dashboard
  • Big Data management
  • Case Study of Big Data Dashboard: Tableau and Pentaho
  • Use Big Data apps to push location-based services in Govt.
  • Tracking system and management

Day-5: Session-1: How to justify Big Data BI implementation within an organization:

  • Defining the ROI (Return on Investment) of a Big Data implementation
  • Case studies on saving analyst time in the collection and preparation of data – increasing productivity
  • Case studies of revenue gain from saving licensed-database costs
  • Revenue gain from location-based services
  • Savings from fraud prevention
  • An integrated spreadsheet approach for calculating approximate expenses vs. revenue gains/savings from a Big Data implementation

Day-5: Session-2: Step-by-step procedure for replacing a legacy data system with a Big Data system:

  • Understanding a practical Big Data migration roadmap
  • What critical information is needed before architecting a Big Data implementation?
  • What are the different ways of calculating the volume, velocity, variety and veracity of data?
  • How to estimate data growth
  • Case studies

Day-5: Session-4: Review of Big Data vendors and their products. Q/A session:

  • Accenture
  • APTEAN (Formerly CDC Software)
  • Cisco Systems
  • Cloudera
  • Dell
  • EMC
  • GoodData Corporation
  • Guavus
  • Hitachi Data Systems
  • Hortonworks
  • HP
  • IBM
  • Informatica
  • Intel
  • Jaspersoft
  • Microsoft
  • MongoDB (Formerly 10Gen)
  • MU Sigma
  • Netapp
  • Opera Solutions
  • Oracle
  • Pentaho
  • Platfora
  • Qliktech
  • Quantum
  • Rackspace
  • Revolution Analytics
  • Salesforce
  • SAP
  • SAS Institute
  • Sisense
  • Software AG/Terracotta
  • Soft10 Automation
  • Splunk
  • Sqrrl
  • Supermicro
  • Tableau Software
  • Teradata
  • Think Big Analytics
  • Tidemark Systems
  • Treeminer
  • VMware (Part of EMC)

Big Data Business Intelligence for Telecom and Communication Service Providers Training Course

Duration

35 hours (usually 5 days including breaks)

Requirements

  • Basic knowledge of Telecom business operations and data systems in their domain
  • Basic understanding of SQL/Oracle or relational databases
  • Basic understanding of statistics (at spreadsheet level)

Overview

Communications service providers (CSPs) are facing pressure to reduce costs and maximize average revenue per user (ARPU) while ensuring an excellent customer experience, but data volumes keep growing. Global mobile data traffic will grow at a compound annual growth rate (CAGR) of 78 percent to 2016, reaching 10.8 exabytes per month.

Meanwhile, CSPs are generating large volumes of data, including call detail records (CDR), network data and customer data. Companies that fully exploit this data gain a competitive edge. According to a recent survey by The Economist Intelligence Unit, companies that use data-directed decision-making enjoy a 5-6% boost in productivity. Yet 53% of companies leverage only half of their valuable data, and one-fourth of respondents noted that vast quantities of useful data go untapped. The data volumes are so high that manual analysis is impossible, and most legacy software systems can’t keep up, resulting in valuable data being discarded or ignored.

With high-speed, scalable big data software, CSPs can mine all their data for better decision making in less time. Different Big Data products and techniques provide an end-to-end software platform for collecting, preparing, analyzing and presenting insights from big data. Application areas include network performance monitoring, fraud detection, customer churn detection and credit risk analysis. Big Data & Analytics products scale to handle terabytes of data, but implementing such tools requires new kinds of cloud-based database systems such as Hadoop or massively parallel computing processors (KPU, etc.).

This course on Big Data BI for Telco covers all the emerging areas in which CSPs are investing for productivity gains and new business revenue streams. The course provides a complete 360-degree overview of Big Data BI in Telco so that decision makers and managers can gain a wide and comprehensive view of the possibilities of Big Data BI for productivity and revenue gain.

Course objectives

The main objective of the course is to introduce new Big Data business intelligence techniques in four sectors of the Telecom business (Marketing/Sales, Network Operation, Financial Operation and Customer Relationship Management). Students will be introduced to the following:

  • Introduction to Big Data – the 4 Vs (volume, velocity, variety and veracity) of Big Data – generation, extraction and management from a Telco perspective
  • How Big Data analytics differs from legacy data analytics
  • In-house justification of Big Data – a Telco perspective
  • Introduction to the Hadoop ecosystem – familiarity with all Hadoop tools like Hive, Pig and Spark – when and how they are used to solve Big Data problems
  • How Big Data is extracted for analytics tools – how business analysts can reduce their pain points in the collection and analysis of data through an integrated Hadoop dashboard approach
  • Basic introduction to insight analytics, visualization analytics and predictive analytics for Telco
  • Customer churn analytics and Big Data – how Big Data analytics can reduce customer churn and customer dissatisfaction in Telco – case studies
  • Network failure and service failure analytics from network metadata and IPDR
  • Financial analysis – fraud, wastage and ROI estimation from sales and operational data
  • The customer acquisition problem – target marketing, customer segmentation and cross-selling from sales data
  • Introduction to and summary of all Big Data analytics products and where they fit into the Telco analytics space
  • Conclusion – how to take a step-by-step approach to introducing Big Data Business Intelligence in your organization

Target Audience

  • Network operations managers, financial managers, CRM managers and top IT managers in the Telco CIO office
  • Business Analysts in Telco
  • CFO office managers/analysts
  • Operational managers
  • QA managers

Course Outline

Breakdown of topics on a daily basis (each session is 2 hours):

Day-1: Session-1: Business Overview of Why Big Data Business Intelligence in Telco

  • Case Studies from T-Mobile, Verizon etc.
  • Big Data adoption rate in North American Telcos and how they are aligning their future business models and operations around Big Data BI
  • Broad Scale Application Area
  • Network and Service management
  • Customer Churn Management
  • Data Integration & Dashboard visualization
  • Fraud management
  • Business Rule generation
  • Customer profiling
  • Localized Ad pushing

Day-1: Session-2: Introduction to Big Data-1

  • Main characteristics of Big Data – volume, variety, velocity and veracity. MPP (Massively Parallel Processing) architecture for volume.
  • Data Warehouses – static schema, slowly evolving dataset
  • MPP databases like Greenplum, Exadata, Teradata, Netezza, Vertica, etc.
  • Hadoop-based solutions – no conditions on the structure of the dataset
  • Typical pattern: HDFS, MapReduce (crunch), retrieve from HDFS
  • Batch processing – suited for analytical/non-interactive workloads
  • Velocity: CEP (Complex Event Processing) for streaming data
  • Typical choices – CEP products (e.g. Infostreams, Apama, MarkLogic)
  • Less production-ready – Storm/S4
  • NoSQL databases – (columnar and key-value): best suited as an analytical adjunct to a data warehouse/database

Day-1: Session-3: Introduction to Big Data-2

NoSQL solutions

  • KV Store – Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB)
  • KV Store – Dynamo, Voldemort, Dynomite, SubRecord, Mo8onDb, DovetailDB
  • KV Store (Hierarchical) – GT.M, Caché
  • KV Store (Ordered) – TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord
  • KV Cache – Memcached, Repcached, Coherence, Infinispan, eXtreme Scale, JBossCache, Velocity, Terracotta
  • Tuple Store – Gigaspaces, Coord, Apache River
  • Object Database – ZopeDB, db4o, Shoal
  • Document Store – CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML databases, ThruDB, CloudKit, Persevere, Riak (Basho), Scalaris
  • Wide Columnar Store – BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI

Varieties of Data: Introduction to Data Cleaning issues in Big Data

  • RDBMS – static structure/schema; does not promote an agile, exploratory environment
  • NoSQL – semi structured, enough structure to store data without exact schema before storing data
  • Data cleaning issues

Day-1: Session-4: Big Data Introduction-3: Hadoop

  • When to select Hadoop?
  • STRUCTURED – Enterprise data warehouses/databases can store massive data (at a cost) but impose structure (not good for active exploration)
  • SEMI-STRUCTURED data – difficult to handle with traditional solutions (DW/DB)
  • Warehousing data = HUGE effort and static even after implementation
  • For variety & volume of data, crunched on commodity hardware – HADOOP
  • Commodity H/W needed to create a Hadoop Cluster

Introduction to MapReduce/HDFS

  • MapReduce – distribute computing over multiple servers
  • HDFS – make data available locally for the computing process (with redundancy)
  • Data – can be unstructured/schema-less (unlike RDBMS)
  • Developer responsibility to make sense of data
  • Programming MapReduce = working with Java (pros/cons), manually loading data into HDFS

Day-2: Session-1.1: Spark: in-memory distributed processing

  • What is “In memory” processing?
  • Spark SQL
  • Spark SDK
  • Spark API
  • RDDs (Resilient Distributed Datasets)
  • Spark Lib
  • Hanna
  • How to migrate an existing Hadoop system to Spark (a minimal PySpark sketch follows this list)
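
For a feel of the Spark APIs listed above, here is a minimal PySpark sketch contrasting the RDD API with DataFrames/Spark SQL; the file name and its columns are hypothetical.

```python
# A minimal PySpark sketch: a per-caller aggregation via RDDs and Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("telco-demo").getOrCreate()

# RDD API: count records per caller from raw text lines
calls = spark.sparkContext.textFile("cdr.csv")      # placeholder path
per_caller = (calls.map(lambda line: (line.split(",")[0], 1))
                   .reduceByKey(lambda a, b: a + b))
print(per_caller.take(5))

# DataFrame / Spark SQL over the same file
df = spark.read.option("header", True).csv("cdr.csv")
df.createOrReplaceTempView("cdr")
spark.sql("SELECT caller, COUNT(*) AS calls "
          "FROM cdr GROUP BY caller").show(5)

spark.stop()
```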

Day-2: Session-1.2: Storm – real-time processing in Big Data

  • Streams
  • Spouts
  • Bolts
  • Topologies (a conceptual sketch follows this list)
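
Storm itself is JVM-based, so the sketch below is only a framework-free, conceptual illustration of its model in Python: a spout emits an endless stream of tuples, a bolt transforms them, and the generator chain stands in for a topology.

```python
# Conceptual only: a spout emits tuples, a bolt transforms them, and the
# generator chain plays the role of a topology. Real Storm distributes
# these components across a cluster.
import itertools
import random

def event_spout():                      # spout: unbounded tuple source
    events = ["dropped_call", "sms", "roaming", "data_session"]
    for _ in itertools.count():
        yield random.choice(events)

def counter_bolt(stream):               # bolt: stateful transformation
    counts = {}
    for event in stream:
        counts[event] = counts.get(event, 0) + 1
        yield event, counts[event]

# "topology": spout -> bolt, capped at 10 tuples for the demo
for event, count in itertools.islice(counter_bolt(event_spout()), 10):
    print(event, count)
```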

Day-2: Session-2: Big Data Management System

  • Moving parts, compute nodes that start/fail: ZooKeeper – for configuration, coordination and naming services
  • Complex pipelines/workflows: Oozie – manages workflows, dependencies and daisy-chaining
  • Deployment, configuration, cluster management, upgrades, etc. (sys admin): Ambari
  • In the cloud: Whirr
  • Evolving Big Data platform tools for tracking
  • ETL layer application issues

Day-2: Session-3: Predictive analytics in Business Intelligence-1: Fundamental Techniques & Machine-learning-based BI:

  • Introduction to Machine learning
  • Learning classification techniques
  • Bayesian prediction – preparing a training file
  • Markov random fields
  • Supervised and unsupervised learning
  • Feature extraction
  • Support Vector Machines
  • Neural networks
  • Reinforcement learning
  • The Big Data large-variable problem – Random Forest (RF)
  • Representation learning
  • Deep learning
  • The Big Data automation problem – multi-model ensemble RF
  • Automation through Soft10-M
  • LDA and topic modeling
  • Agile learning
  • Agent-based learning – examples from Telco operations
  • Distributed learning – examples from Telco operations
  • Introduction to open source tools for predictive analytics: R, RapidMiner, Mahout
  • More scalable analytics – Apache Hama, Spark and CMU GraphLab

Day-2: Session-4: Predictive analytics ecosystem-2: Common predictive analytics problems in Telecom

  • Insight analytics
  • Visualization analytics
  • Structured predictive analytics
  • Unstructured predictive analytics
  • Customer profiling
  • Recommendation engines
  • Pattern detection
  • Rule/scenario discovery – failure, fraud, optimization
  • Root cause discovery
  • Sentiment analysis
  • CRM analytics
  • Network analytics
  • Text analytics
  • Technology-assisted review
  • Fraud analytics
  • Real-time analytics

Day-3: Session-1: Network operations analytics – root cause analysis of network failures and service interruptions from metadata, IPDR and CRM:

  • CPU Usage
  • Memory Usage
  • QoS Queue Usage
  • Device Temperature
  • Interface Error
  • IOS versions
  • Routing Events
  • Latency variations
  • Syslog analytics
  • Packet Loss
  • Load simulation
  • Topology inference
  • Performance Threshold
  • Device Traps
  • IPDR (Internet Protocol Detail Record) collection and processing
  • Use of IPDR data for subscriber bandwidth consumption, network interface utilization, modem status and diagnostics
  • HFC information

Day-3: Session-2: Tools for Network service failure analysis:

  • Network Summary Dashboard: monitor overall network deployments and track your organization’s key performance indicators
  • Peak Period Analysis Dashboard: understand the application and subscriber trends driving peak utilization, with location-specific granularity
  • Routing Efficiency Dashboard: control network costs and build business cases for capital projects with a complete understanding of interconnect and transit relationships
  • Real-Time Entertainment Dashboard: access metrics that matter, including video views, duration, and video quality of experience (QoE)
  • IPv6 Transition Dashboard: investigate the ongoing adoption of IPv6 on your network and gain insight into the applications and devices driving trends
  • Case-Study-1: The Alcatel-Lucent Big Network Analytics (BNA) Data Miner
  • Multi-dimensional mobile intelligence (m.IQ6)

Day-3: Session-3: Big Data BI for Marketing/Sales – understanding sales/marketing from sales data (all topics will be shown with a live predictive analytics demo):

  • Identifying the highest-velocity clients
  • Identifying clients for given products
  • Identifying the right set of products for a client (recommendation engine)
  • Market segmentation techniques
  • Cross-sell and upsell techniques
  • Client segmentation techniques (a minimal k-means sketch follows this list)
  • Sales revenue forecasting techniques
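
As an illustration of the segmentation techniques above, the sketch below clusters synthetic clients with k-means in scikit-learn; the three features are invented stand-ins for real sales and usage variables.

```python
# A minimal client-segmentation sketch with k-means on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: monthly_spend, data_gb, support_calls (all fabricated)
X = rng.normal(loc=[40.0, 5.0, 1.0], scale=[15.0, 3.0, 1.0], size=(1000, 3))

X_scaled = StandardScaler().fit_transform(X)         # features on one scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

print("segment sizes:", np.bincount(kmeans.labels_))
print("centers (scaled):\n", kmeans.cluster_centers_.round(2))
```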

Day-3: Session-4: BI needed for the Telco CFO office:

  • Overview of the business analytics work needed in a CFO office
  • Risk analysis of new investments
  • Revenue and profit forecasting
  • New client acquisition forecasting
  • Loss forecasting
  • Fraud analytics on finances (details in the next session)

Day-4: Session-1: Fraud prevention BI from Big Data in Telco – fraud analytics:

  • Bandwidth leakage / Bandwidth fraud
  • Vendor fraud/over charging for projects
  • Customer refund/claims frauds
  • Travel reimbursement frauds

Day-4: Session-2: From Churn Prediction to Churn Prevention:

  • 3 types of churn: active/deliberate, rotational/incidental, passive/involuntary
  • 3 classifications of churned customers: total, hidden, partial
  • Understanding CRM variables for churn (a minimal churn-scoring sketch follows this list)
  • Customer behavior data collection
  • Customer perception data collection
  • Customer demographics data collection
  • Cleaning CRM data
  • Unstructured CRM data (customer calls, tickets, emails) and its conversion to structured data for churn analysis
  • Social media CRM – a new way to extract the customer satisfaction index
  • Case Study-1: T-Mobile USA – churn reduced by 50%
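
To show how CRM variables feed a churn model, here is a minimal churn-scoring sketch using logistic regression; the tiny DataFrame and its columns are hypothetical stand-ins for real CRM extracts.

```python
# A minimal churn-scoring sketch: logistic regression on toy CRM variables.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "tenure_months":   [2, 40, 5, 60, 1, 30, 3, 48, 7, 55] * 20,
    "support_tickets": [5, 0, 4, 1, 6, 1, 5, 0, 3, 1] * 20,
    "monthly_spend":   [20, 70, 25, 90, 15, 60, 30, 85, 35, 80] * 20,
    "churned":         [1, 0, 1, 0, 1, 0, 1, 0, 1, 0] * 20,
})
X, y = df.drop(columns="churned"), df["churned"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
# probability of churn per customer; AUC summarizes ranking quality
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```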

Day-4: Session-3: How to use predictive analytics for root cause analysis of customer dissatisfaction:

  • Case Study-1: Linking dissatisfaction to issues – accounting, engineering failures such as service interruptions and poor bandwidth service
  • Case Study-2: A Big Data QA dashboard to track the customer satisfaction index from various parameters, such as call escalations, criticality of issues, pending service interruption events, etc.

Day-4: Session-4: Big Data Dashboard for quick accessibility and display of diverse data:

  • Integration of existing application platform with Big Data Dashboard
  • Big Data management
  • Case Study of Big Data Dashboard: Tableau and Pentaho
  • Use Big Data apps to push location-based advertisements
  • Tracking system and management

Day-5: Session-1: How to justify Big Data BI implementation within an organization:

  • Defining the ROI (Return on Investment) of a Big Data implementation
  • Case studies on saving analyst time in the collection and preparation of data – increasing productivity
  • Case studies of revenue gain from reducing customer churn
  • Revenue gain from location-based and other targeted ads
  • An integrated spreadsheet approach for calculating approximate expenses vs. revenue gains/savings from a Big Data implementation

Day-5: Session-2: Step-by-step procedure for replacing a legacy data system with a Big Data system:

  • Understanding a practical Big Data migration roadmap
  • What critical information is needed before architecting a Big Data implementation?
  • What are the different ways of calculating the volume, velocity, variety and veracity of data?
  • How to estimate data growth
  • Case studies from two Telcos

Day-5: Sessions 3 & 4: Review of Big Data vendors and their products. Q/A session:

  • Accenture
  • Alcatel-Lucent
  • Amazon – A9
  • APTEAN (Formerly CDC Software)
  • Cisco Systems
  • Cloudera
  • Dell
  • EMC
  • GoodData Corporation
  • Guavus
  • Hitachi Data Systems
  • Hortonworks
  • Huawei
  • HP
  • IBM
  • Informatica
  • Intel
  • Jaspersoft
  • Microsoft
  • MongoDB (Formerly 10Gen)
  • MU Sigma
  • Netapp
  • Opera Solutions
  • Oracle
  • Pentaho
  • Platfora
  • Qliktech
  • Quantum
  • Rackspace
  • Revolution Analytics
  • Salesforce
  • SAP
  • SAS Institute
  • Sisense
  • Software AG/Terracotta
  • Soft10 Automation
  • Splunk
  • Sqrrl
  • Supermicro
  • Tableau Software
  • Teradata
  • Think Big Analytics
  • Tidemark Systems
  • VMware (Part of EMC)