Big Data Business Intelligence for Govt. Agencies - Training Plan


Training Language

This training is delivered in Polish or English.

Course Code

bdbiga

Duration

40 hours (usually 5 days, including breaks)

Requirements

1. Basic knowledge of business operations and data systems in government, in their domain
2. Basic understanding of SQL/Oracle or another relational database
3. Basic understanding of statistics (at spreadsheet level)

Course Overview

Advances in technology and the increasing amount of information are transforming how business is conducted in many industries, including government. Government data generation and digital archiving rates are on the rise due to the rapid growth of mobile devices and applications, smart sensors and devices, cloud computing solutions, and citizen-facing portals. As digital information expands and becomes more complex, information management, processing, storage, security, and disposition become more complex as well. New capture, search, discovery, and analysis tools are helping organizations gain insights from their unstructured data. The government market is at a tipping point: realizing that information is a strategic asset, government needs to protect, leverage, and analyze both structured and unstructured information to better serve and meet mission requirements. As government leaders strive to evolve into data-driven organizations that successfully accomplish their missions, they are laying the groundwork to correlate dependencies across events, people, processes, and information.

High-value government solutions will be created from a mashup of the most disruptive technologies:

  • Mobile devices and applications

  • Cloud services

  • Social business technologies and networking

  • Big Data and analytics

IDC predicts that by 2020, the IT industry will reach $5 trillion, approximately $1.7 trillion larger than today, and that 80% of the industry's growth will be driven by these 3rd Platform technologies. In the long term, these technologies will be key tools for dealing with the complexity of increased digital information. Big Data is one of the intelligent industry solutions and allows government to make better decisions by taking action based on patterns revealed by analyzing large volumes of data — related and unrelated, structured and unstructured.

But accomplishing these feats takes far more than simply accumulating massive quantities of data. “Making sense of these volumes of Big Data requires cutting-edge tools and technologies that can analyze and extract useful knowledge from vast and diverse streams of information,” Tom Kalil and Fen Zhao of the White House Office of Science and Technology Policy wrote in a post on the OSTP Blog.

The White House took a step toward helping agencies find these technologies when it established the National Big Data Research and Development Initiative in 2012. The initiative included more than $200 million to make the most of the explosion of Big Data and the tools needed to analyze it.

The challenges that Big Data poses are nearly as daunting as its promise is encouraging. Storing data efficiently is one of these challenges. As always, budgets are tight, so agencies must minimize the per-megabyte price of storage and keep the data within easy access so that users can get it when they want it and how they need it. Backing up massive quantities of data heightens the challenge.

Analyzing the data effectively is another major challenge. Many agencies employ commercial tools that enable them to sift through the mountains of data, spotting trends that can help them operate more efficiently. (A recent study by MeriTalk found that federal IT executives think Big Data could help agencies save more than $500 billion while also fulfilling mission objectives.)

Custom-developed Big Data tools also are allowing agencies to address the need to analyze their data. For example, the Oak Ridge National Laboratory’s Computational Data Analytics Group has made its Piranha data analytics system available to other agencies. The system has helped medical researchers find a link that can alert doctors to aortic aneurysms before they strike. It’s also used for more mundane tasks, such as sifting through résumés to connect job candidates with hiring managers.

Training Plan

Breakdown of topics on a daily basis (each session is 2 hours):

  1. Day-1: Session-1: Business Overview of Why Big Data Business Intelligence in Govt.

  • Case Studies from NIH, DoE

  • Big Data adoption rate in Govt. agencies, and how they are aligning their future operations around Big Data predictive analytics

  • Broad-scale application areas in DoD, NSA, IRS, USDA, etc.

  • Interfacing Big Data with Legacy data

  • Basic understanding of enabling technologies in predictive analytics

  • Data Integration & Dashboard visualization

  • Fraud management

  • Business rule / fraud-detection rule generation

  • Threat detection and profiling

  • Cost benefit analysis for Big Data implementation

  2. Day-1: Session-2: Introduction to Big Data-1

  • Main characteristics of Big Data: volume, variety, velocity, and veracity. MPP architecture for volume.

  • Data Warehouses – static schema, slowly evolving dataset

  • MPP Databases like Greenplum, Exadata, Teradata, Netezza, Vertica etc.

  • Hadoop-based solutions – no conditions on the structure of the dataset.

  • Typical pattern: load into HDFS, crunch with MapReduce, retrieve from HDFS

  • Batch-oriented – suited for analytical/non-interactive workloads

  • Velocity: CEP for streaming data

  • Typical choices – CEP products (e.g., Infostreams, Apama, MarkLogic)

  • Less production-ready – Storm/S4

  • NoSQL Databases – (columnar and key-value): Best suited as analytical adjunct to data warehouse/database

  3. Day-1: Session-3: Introduction to Big Data-2

NoSQL solutions

    • KV Store - Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB)

    • KV Store - Dynamo, Voldemort, Dynomite, SubRecord, Mo8onDb, DovetailDB

    • KV Store (Hierarchical) - GT.M, Caché

    • KV Store (Ordered) - TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord

    • KV Cache - Memcached, Repcached, Coherence, Infinispan, EXtremeScale, JBossCache, Velocity, Terracotta

    • Tuple Store - Gigaspaces, Coord, Apache River

    • Object Database - ZopeDB, db4o, Shoal

    • Document Store - CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML-Databases, ThruDB, CloudKit, Persevere, Riak-Basho, Scalaris

    • Wide Columnar Store - BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI

Varieties of Data: Introduction to Data Cleaning Issues in Big Data

    • RDBMS – static structure/schema; does not promote an agile, exploratory environment.

    • NoSQL – semi-structured; enough structure to store data without defining an exact schema first

    • Data cleaning issues (see the sketch below)
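
The data-cleaning point above lends itself to a short illustration. Below is a minimal sketch in Python, assuming pandas is installed; the records and field names are hypothetical.

    import pandas as pd

    # Semi-structured input: fields may be missing or dirty (hypothetical sample)
    records = [
        {"id": 1, "agency": "USDA", "amount": "1200.50"},
        {"id": 2, "agency": None,   "amount": "n/a"},
        {"id": 3, "agency": "DoE",  "amount": "980"},
    ]
    df = pd.DataFrame(records)

    # Coerce the free-form 'amount' field to numeric; bad values become NaN
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

    # Drop rows that are unusable after coercion
    clean = df.dropna(subset=["agency", "amount"])
    print(clean)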

  4. Day-1: Session-4: Big Data Introduction-3: Hadoop

  • When to select Hadoop?

  • STRUCTURED - Enterprise data warehouses/databases can store massive data (at a cost) but impose structure (not good for active exploration)

  • SEMI-STRUCTURED data – tough to handle with traditional solutions (DW/DB)

  • Warehousing data = HUGE effort and static even after implementation

  • For variety & volume of data, crunched on commodity hardware – HADOOP

  • Commodity H/W needed to create a Hadoop Cluster

Introduction to MapReduce/HDFS

  • MapReduce – distributes computation over multiple servers

  • HDFS – makes data available locally to the computing process (with redundancy)

  • Data – can be unstructured/schema-less (unlike RDBMS)

  • It is the developer's responsibility to make sense of the data

  • Programming MapReduce = working with Java (pros/cons); manually loading data into HDFS (a Hadoop Streaming sketch in Python follows below)
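
As an alternative to native Java, Hadoop Streaming lets any executable act as mapper and reducer. Below is a minimal word-count sketch in Python; it assumes a Hadoop Streaming setup, where the framework sorts mapper output by key before it reaches the reducer, and the file name and invocation are illustrative.

    #!/usr/bin/env python
    # Word-count mapper/reducer for Hadoop Streaming; run the same file as
    # "python wordcount.py map" for the map phase and with no arguments
    # for the reduce phase.
    import sys

    def mapper():
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)              # emit (word, 1)

    def reducer():
        current, count = None, 0
        for line in sys.stdin:                     # input arrives sorted by key
            word, n = line.rstrip("\n").split("\t")
            if word != current:
                if current is not None:
                    print("%s\t%d" % (current, count))
                current, count = word, 0
            count += int(n)
        if current is not None:
            print("%s\t%d" % (current, count))

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "map":
            mapper()
        else:
            reducer()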

  5. Day-2: Session-1: Big Data Ecosystem – Building Big Data ETL: the universe of Big Data tools – which one to use, and when?

  • Hadoop vs. Other NoSQL solutions

  • For interactive, random access to data (see the HBase sketch below)

  • HBase (column-oriented database) on top of Hadoop

  • Random access to data but restrictions imposed (max 1 PB)

  • Not good for ad-hoc analytics, good for logging, counting, time-series

  • Sqoop - Import from databases to Hive or HDFS (JDBC/ODBC access)

  • Flume – Stream data (e.g. log data) into HDFS
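
To make the random-access point concrete, here is a minimal sketch using happybase, a Python Thrift client for HBase (assumed installed, with an HBase Thrift gateway running); the host, table, and row key are hypothetical.

    import happybase

    connection = happybase.Connection("hbase-thrift-host")  # hypothetical gateway
    table = connection.table("citizen_requests")            # hypothetical table

    # Write one cell, then read the row back by key – the access pattern HBase
    # serves well (logging, counting, time-series), unlike ad-hoc analytics.
    table.put(b"req-0001", {b"info:status": b"open"})
    print(table.row(b"req-0001"))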

  6. Day-2: Session-2: Big Data Management System

  • Moving parts, compute nodes start/fail: ZooKeeper – for configuration/coordination/naming services (see the sketch below)

  • Complex pipeline/workflow: Oozie – manage workflow, dependencies, daisy chain

  • Deployment, configuration, cluster management, upgrades, etc. (sys admin): Ambari

  • In the cloud: Whirr
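
A minimal sketch of ZooKeeper acting as a configuration/naming service, using the kazoo Python client (assumed installed, with a ZooKeeper ensemble running); the znode path and value are hypothetical.

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    zk.ensure_path("/config/etl")                        # shared configuration tree
    if not zk.exists("/config/etl/batch_size"):
        zk.create("/config/etl/batch_size", b"5000")     # hypothetical setting
    value, stat = zk.get("/config/etl/batch_size")       # any node reads the same value
    print(value, stat.version)
    zk.stop()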

  7. Day-2: Session-3: Predictive Analytics in Business Intelligence-1: Fundamental Techniques and Machine-Learning-Based BI

  • Introduction to Machine learning

  • Learning classification techniques (a short classification sketch follows this list)

  • Bayesian prediction – preparing a training file

  • Support Vector Machine

  • KNN p-Tree Algebra & vertical mining

  • Neural Network

  • The Big Data large-variable problem – Random Forest (RF)

  • The Big Data automation problem – multi-model ensemble RF

  • Automation through Soft10-M

  • Text analytics tool – Treeminer

  • Agile learning

  • Agent based learning

  • Distributed learning

  • Introduction to open-source tools for predictive analytics: R, RapidMiner, Mahout
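
To ground the classification techniques above, here is a minimal sketch with scikit-learn (assumed installed) that trains three of the listed models – Naive Bayes, an SVM, and a Random Forest – on a synthetic dataset.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic training data standing in for a prepared training file
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for model in (GaussianNB(), SVC(), RandomForestClassifier(n_estimators=100)):
        model.fit(X_train, y_train)
        print(type(model).__name__, model.score(X_test, y_test))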

  8. Day-2: Session-4: Predictive Analytics Ecosystem-2: Common Predictive Analytics Problems in Govt.

  • Insight analytic

  • Visualization analytic

  • Structured predictive analytic

  • Unstructured predictive analytic

  • Threat/fraudster/vendor profiling

  • Recommendation Engine

  • Pattern detection

  • Rule/scenario discovery – failure, fraud, optimization

  • Root cause discovery

  • Sentiment analysis

  • CRM analytic

  • Network analytic

  • Text Analytics

  • Technology assisted review

  • Fraud analytic

  • Real Time Analytic

  9. Day-3: Session-1: Real-Time and Scalable Analytics over Hadoop

  • Why common analytic algorithms fail in Hadoop/HDFS

  • Apache Hama – for Bulk Synchronous Parallel distributed computing

  • Apache Spark – cluster computing for real-time analytics (see the sketch below)

  • CMU GraphLab – graph-based asynchronous approach to distributed computing

  • KNN p-Tree algebra-based approach from Treeminer for reduced hardware operating cost
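
A minimal PySpark sketch of the cluster-computing model (assuming a Spark installation); the HDFS path is hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("log-counts").getOrCreate()

    # Distributed word count over a (hypothetical) HDFS file
    lines = spark.sparkContext.textFile("hdfs:///logs/events.txt")
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    print(counts.take(10))
    spark.stop()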

  10. Day-3: Session-2: Tools for eDiscovery and Forensics

  • eDiscovery over Big Data vs. Legacy data – a comparison of cost and performance

  • Predictive coding and technology assisted review (TAR)

  • Live demo of a TAR product (vMiner) to understand how TAR enables faster discovery

  • Faster indexing through HDFS – velocity of data

  • NLP (Natural Language Processing) – various techniques and open-source products (see the sketch below)

  • eDiscovery in foreign languages – technology for foreign-language processing
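
As a taste of the NLP preprocessing that underlies TAR and eDiscovery indexing, here is a minimal sketch with NLTK (assumed installed, with its 'punkt' tokenizer data downloaded); the sample text is invented.

    import nltk
    # nltk.download("punkt")   # one-time download of tokenizer models

    text = "The contractor submitted the invoice twice. Flag it for review."
    for sentence in nltk.sent_tokenize(text):
        print(nltk.word_tokenize(sentence))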

  11. Day-3: Session-3: Big Data BI for Cyber Security – a Full 360-Degree View, from Rapid Data Collection to Threat Identification

  • Understanding the basics of security analytics – attack surface, security misconfiguration, host defenses

  • Network infrastructure / large data pipe / response ETL for real-time analytics

  • Prescriptive vs. predictive – fixed rule-based detection vs. auto-discovery of threat rules from metadata (see the sketch below)
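
A deliberately simple sketch of the "fixed rule" side of that contrast: flag any source IP that exceeds a request threshold in a time window. The threshold and sample log are hypothetical; a predictive system would instead learn such rules from metadata.

    from collections import Counter

    REQUESTS_PER_WINDOW = 100                     # hypothetical fixed rule
    log = [("10.0.0.5", "/login")] * 150 + [("10.0.0.9", "/home")] * 20

    hits = Counter(ip for ip, _ in log)           # requests per source IP
    threats = [ip for ip, n in hits.items() if n > REQUESTS_PER_WINDOW]
    print("flagged:", threats)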

  12. Day-3: Session-4: Big Data in USDA: Applications in Agriculture

  • Introduction to IoT (Internet of Things) for agriculture – sensor-based Big Data and control

  • Introduction to Satellite imaging and its application in agriculture

  • Integrating sensor and image data for soil fertility, cultivation recommendations, and forecasting

  • Agriculture insurance and Big Data

  • Crop Loss forecasting

  13. Day-4: Session-1: Fraud Prevention BI from Big Data in Govt – Fraud Analytics

  • Basic classification of fraud analytics – rule-based vs. predictive analytics

  • Supervised vs. unsupervised machine learning for fraud-pattern detection (see the sketch after this list)

  • Vendor fraud / overcharging on projects

  • Medicare and Medicaid fraud – fraud-detection techniques for claim processing

  • Travel reimbursement frauds

  • IRS refund frauds

  • Case studies and live demos will be given wherever data is available.
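
To illustrate the unsupervised side, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest (assumed installed); the claim amounts are synthetic, not real data.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    claims = rng.normal(200, 30, size=(500, 1))         # typical claim amounts
    claims[:5] = rng.normal(5000, 500, size=(5, 1))     # a few inflated, fraud-like claims

    model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
    flags = model.predict(claims)                       # -1 marks anomalies
    print("anomalous claims:", claims[flags == -1].ravel())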

  14. Day-4: Session-2: Social Media Analytics – Intelligence Gathering and Analysis

  • Big Data ETL API for extracting social media data

  • Text, image, metadata, and video

  • Sentiment analysis of social media feeds (see the sketch after this list)

  • Contextual and non-contextual filtering of social media feeds

  • Social Media Dashboard to integrate diverse social media

  • Automated profiling of social media accounts

  • A live demo of each analytic will be given using the Treeminer tool.
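
A deliberately simple lexicon-based sentiment sketch for social media feeds; the word lists and posts are illustrative only, and a production session would use a trained model instead.

    POSITIVE = {"great", "helpful", "fast"}
    NEGATIVE = {"slow", "broken", "rude"}

    def score(post):
        words = set(post.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    feed = ["Great service, very helpful staff", "Portal is slow and broken again"]
    for post in feed:
        print(score(post), post)   # positive score = positive sentiment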

  15. Day-4: Session-3: Big Data Analytics in Image Processing and Video Feeds

  • Image-storage techniques in Big Data – storage solutions for data exceeding petabytes

  • LTFS and LTO

  • GPFS-LTFS (layered storage solution for big image data)

  • Fundamentals of image analytics

  • Object recognition

  • Image segmentation (see the sketch after this list)

  • Motion tracking

  • 3-D image reconstruction
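
A minimal image-segmentation sketch with OpenCV (assumed installed); the input file is hypothetical. Otsu thresholding separates foreground objects from background, a common first step toward object recognition.

    import cv2

    image = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    # Otsu's method picks the threshold automatically from the histogram
    _, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("segmented.png", mask)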

  16. Day-4: Session-4: Big Data Applications in NIH

  • Emerging areas of bioinformatics

  • Meta-genomics and Big Data mining issues

  • Big Data predictive analytics for pharmacogenomics, metabolomics, and proteomics

  • Big Data in downstream Genomics process

  • Applications of Big Data predictive analytics in public health

  17. Big Data Dashboard for Quick Access to and Display of Diverse Data

  • Integration of existing application platform with Big Data Dashboard

  • Big Data management

  • Case Study of Big Data Dashboard: Tableau and Pentaho

  • Using Big Data apps to push location-based services in Govt.

  • Tracking system and management

  18. Day-5: Session-1: How to Justify a Big Data BI Implementation Within an Organization

  • Defining ROI for Big Data implementation

  • Case studies of saving analyst time in data collection and preparation – productivity gains

  • Case studies of gains from retiring licensed-database costs

  • Revenue gain from location based services

  • Saving from fraud prevention

  • An integrated spreadsheet approach to estimating expenses vs. revenue gains/savings from a Big Data implementation (see the sketch below).
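
The spreadsheet approach boils down to simple arithmetic; here is a sketch in Python where every figure is a hypothetical input, not a benchmark.

    hardware_and_setup = 250_000          # one-time implementation expense
    annual_operations  = 80_000           # yearly running cost

    analyst_time_saved = 120_000          # productivity gain per year
    license_savings    = 60_000           # retired licensed-database cost
    fraud_prevented    = 200_000          # savings from fraud prevention

    annual_gain = analyst_time_saved + license_savings + fraud_prevented
    first_year_roi = (annual_gain - annual_operations - hardware_and_setup) / hardware_and_setup
    print("first-year ROI: %.0f%%" % (100 * first_year_roi))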

  19. Day-5: Session-2: Step-by-Step Procedure for Replacing a Legacy Data System with a Big Data System

  • Understanding practical Big Data Migration Roadmap

  • What information is needed before architecting a Big Data implementation

  • Different ways of calculating the volume, velocity, variety, and veracity of data

  • How to estimate data growth (see the sketch after this list)

  • Case studies
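
One common way to estimate data growth is simple compounding; the starting volume and growth rate below are hypothetical inputs.

    def projected_volume(start_tb, annual_growth, years):
        """Project data volume assuming a constant annual growth rate."""
        return start_tb * (1 + annual_growth) ** years

    for year in range(6):                       # 5-year planning horizon
        print(year, round(projected_volume(50, 0.40, year), 1), "TB")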

  20. Day-5: Session-4: Review of Big Data Vendors and Their Products; Q/A Session

  • Accenture

  • APTEAN (Formerly CDC Software)

  • Cisco Systems

  • Cloudera

  • Dell

  • EMC

  • GoodData Corporation

  • Guavus

  • Hitachi Data Systems

  • Hortonworks

  • HP

  • IBM

  • Informatica

  • Intel

  • Jaspersoft

  • Microsoft

  • MongoDB (Formerly 10Gen)

  • MU Sigma

  • NetApp

  • Opera Solutions

  • Oracle

  • Pentaho

  • Platfora

  • Qliktech

  • Quantum

  • Rackspace

  • Revolution Analytics

  • Salesforce

  • SAP

  • SAS Institute

  • Sisense

  • Software AG/Terracotta

  • Soft10 Automation

  • Splunk

  • Sqrrl

  • Supermicro

  • Tableau Software

  • Teradata

  • Think Big Analytics

  • Tidemark Systems

  • Treeminer

  • VMware (Part of EMC) 


We run guaranteed training even for a single participant!

Open Training
Participants come from different companies. The course follows the outline published on our pages.
from 18000 PLN

Closed Training
Participants from a single organization only; external participants cannot join. The programme is usually tailored to the specific group, with session topics agreed between the client and the trainer.
from 18000 PLN

Remote Training
The instructor and participants are in different physical locations and communicate over the Internet.
from 34000 PLN

Self-Study
Training without trainer involvement. Participants work through recorded video materials, tests, and other content at their own pace.
Price not yet set

The more participants you register, the greater the savings. The table below shows the price per participant by number of registrations; it illustrates example prices only, and the current offer for this course may differ.

Number of participants   Open Training   Closed Training   Remote Training
1                        18000 PLN       18000 PLN         34000 PLN
2                        10100 PLN       10000 PLN         18000 PLN
3                        7467 PLN        7333 PLN          12667 PLN
4                        6150 PLN        6000 PLN          10000 PLN


Upcoming Courses

Location                                Course Date             Price [Remote / On-site]
Szczecin                                Mon, 2016-06-13 09:00   10000 PLN / 6119 PLN
Olsztyn, ul. Kajki 3/1                  Mon, 2016-06-13 09:00   10000 PLN / 6119 PLN
Kraków                                  Mon, 2016-06-13 09:00   10000 PLN / 6325 PLN
Zielona Góra, ul. Reja 6                Mon, 2016-06-13 09:00   10000 PLN / 6106 PLN
Wrocław, ul. Ludwika Rydygiera 2a/22    Mon, 2016-06-13 09:00   10000 PLN / 6038 PLN
