Business Intelligence Training Courses

Client Testimonials

Power BI

Exercises, additional tips from the trainer, and the flexibility to add further insights on request.

Anna Alechno - British American Shared Services Europe BAT GBS Finance, WER/Centre/EEMEA

Power BI

Power BI's possibilities.

British American Shared Services Europe

Power BI

The trainer stayed in constant contact with the training participants, adjusted the pace to our needs, and resolved our problems and answered our questions as they arose.

Dominika Jaworska - UNILEVER POLAND SERVICES SP. Z O.O.

Pentaho Business Intelligence (PBI) - reporting modules

The trainer's knowledge and the equipment made available.

Mariusz Moskal - Orange Szkolenia sp. z o.o.

Power BI

Power BI web - Q&A

Cristina Palii - British American Shared Services Europe

Power BI

Offering a more in-depth treatment of Power BI than any other training institute I have come across.

Mohammed Al Ameer - BMMI

Power BI

Well-prepared, well-thought-out exercises and the instructor's knowledge.

Sandra Przewdzing - Unilever Poland Services sp. z o.o.

Power BI

The training was very coherent; I learned about many interesting features of the program.

Anna Klimczak - Unilever Poland Services sp. z o.o.

Power BI

The trainer has very broad knowledge and knows how to convey it. He engages participants in interaction and in jointly solving the problems encountered.

Nina Rokita - UNILEVER POLAND SERVICES SP. Z O.O.

Power BI

The fact that we received ready-made solutions to the problems we posed, plus a demonstration of a broad spectrum of applications and the compendium of knowledge we were given.

Wojciech Michalak - UNILEVER POLAND SERVICES SP. Z O.O.

Pentaho Business Intelligence (PBI) - reporting modules

The use of Data Integration.

Waldemar Wisniewski - Orange Szkolenia sp. z o.o.

Pentaho Business Intelligence (PBI) - reporting modules

Examples enriched with real data and scenarios useful in everyday practice.

Krzysztof Świątczak - Orange Szkolenia sp. z o.o.

Business Intelligence Course Outlines

Code Name Duration Overview
dashbuilder Dashbuilder for business users 14 hours Dashbuilder is an open-source web application for visually creating business dashboards and reports. In this instructor-led, live training, participants will learn how to create business dashboards and reports using Dashbuilder. By the end of this training, participants will be able to: Visually configure and personalize dashboards using drag-and-drop Create different types of visualizations using charting libraries Define interactive report tables Create and edit inline KPIs (Key Performance Indicators) Customize the look and feel of metric displayers Audience Managers Analysts Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
dashbuilderforengineers Dashbuilder for engineers 7 hours Dashbuilder is an open-source web application for visually creating business dashboards and reports. In this instructor-led, live training, participants will learn how to set up, configure, integrate and deploy Dashbuilder. By the end of this training, participants will be able to: Extract data from heterogeneous sources such as JDBC databases and text files Use connectors to connect to third-party systems and platforms such as jBPM Configure roles, permissions and access controls for users Deploy Dashbuilder to a live production environment Audience Developers IT and system architects Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
magellan Magellan: Geospatial Analytics with Spark 14 hours Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics. This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark. By the end of this training, participants will be able to: Efficiently query, parse and join geospatial datasets at scale Implement geospatial data in business intelligence and predictive analytics applications Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables Audience Application developers Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
mdlmrah Model MapReduce and Apache Hadoop 14 hours The course is intended for IT specialists who work with distributed processing of large data sets across clusters of computers. Data Mining and Business Intelligence Introduction Area of application Capabilities Basics of data exploration Big data What does Big data stand for? Big data and Data mining MapReduce Model basics Example application Stats Cluster model Hadoop What is Hadoop Installation Configuration Cluster settings Architecture and configuration of Hadoop Distributed File System Console tools DistCp tool MapReduce and Hadoop Streaming Administration and configuration of Hadoop On Demand Alternatives
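The MapReduce model basics covered in this course boil down to three phases - map, shuffle, reduce. The pure-Python word-count sketch below illustrates the phases without any Hadoop API (a minimal, framework-free illustration; the sample documents are made up):

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data and data mining", "big data"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

In Hadoop the shuffle is performed by the framework between the map and reduce tasks; only the map and reduce functions are written by the developer.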
zeppelin Zeppelin for interactive data analytics 14 hours Apache Zeppelin is a web-based notebook for capturing, exploring, visualizing and sharing Hadoop and Spark based data. This instructor-led, live training introduces the concepts behind interactive data analytics and walks participants through the deployment and usage of Zeppelin in a single-user or multi-user environment. By the end of this training, participants will be able to: Install and configure Zeppelin Develop, organize, execute and share data in a browser-based interface Visualize results without referring to the command line or cluster details Execute and collaborate on long workflows Work with any of a number of plug-in language/data-processing backends, such as Scala (with Apache Spark), Python (with Apache Spark), Spark SQL, JDBC, Markdown and Shell Integrate Zeppelin with Spark, Flink and MapReduce Secure multi-user instances of Zeppelin with Apache Shiro Audience Data engineers Data analysts Data scientists Software developers Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
3119 Business Intelligence in MS SQL Server 2008 14 hours This training covers the basics of creating a data warehouse environment based on MS SQL Server 2008. Participants gain the foundations for designing and building a data warehouse that runs on MS SQL Server 2008, learn how to build a simple ETL process based on SSIS, and then design and implement a data cube using SSAS. Participants will be able to manage an OLAP database: creating and deleting OLAP databases, processing partitions, and making changes online. Participants will also acquire a working knowledge of XML/A and MDX scripting. Basics, objectives and applications of data warehouses; types of data warehouse servers; building ETL processes in SSIS; designing data cubes in Analysis Services: measure groups, measures, dimensions, hierarchies, attributes; developing the data cube project: calculated measures, partitions, perspectives, translations, actions, KPIs, build and deploy, processing partitions; XML/A basics: partitioning, full and incremental processing, deleting partitions, aggregation processing; MDX language basics
datameer Datameer for Data Analysts 14 hours Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion. In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources. By the end of this training, participants will be able to: Create, curate, and interactively explore an enterprise data lake Access business intelligence data warehouses, transactional databases and other analytic stores Use a spreadsheet user-interface to design end-to-end data processing pipelines Access pre-built functions to explore complex data relationships Use drag-and-drop wizards to visualize data and create dashboards Use tables, charts, graphs, and maps to analyze query results Audience Data analysts Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
6550 Business Intelligence in MS SQL Server 2012 14 hours
deckgl deck.gl: Visualizing Large-scale Geospatial Data 14 hours deck.gl is an open-source, WebGL-powered library for exploring and visualizing data assets at scale. Created by Uber, it is especially useful for gaining insights from geospatial data sources, such as data on maps. This instructor-led, live training introduces the concepts and functionality behind deck.gl and walks participants through the set up of a demonstration project. By the end of this training, participants will be able to: Take data from very large collections and turn it into compelling visual representations Visualize data collected from transportation and journey-related use cases, such as pick-up and drop-off experiences, network traffic, etc. Apply layering techniques to geospatial data to depict changes in data over time Integrate deck.gl with React (for Reactive programming) and Mapbox GL (for visualizations on Mapbox based maps). Understand and explore other use cases for deck.gl, including visualizing points collected from a 3D indoor scan, visualizing machine learning models in order to optimize their algorithms, etc. Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
68780 Apache Spark 14 hours Why Spark? Problems with Traditional Large-Scale Systems Introducing Spark Spark Basics What is Apache Spark? Using the Spark Shell Resilient Distributed Datasets (RDDs) Functional Programming with Spark Working with RDDs RDD Operations Key-Value Pair RDDs MapReduce and Pair RDD Operations The Hadoop Distributed File System Why HDFS? HDFS Architecture Using HDFS Running Spark on a Cluster Overview A Spark Standalone Cluster The Spark Standalone Web UI Parallel Programming with Spark RDD Partitions and HDFS Data Locality Working With Partitions Executing Parallel Operations Caching and Persistence RDD Lineage Caching Overview Distributed Persistence Writing Spark Applications Spark Applications vs. Spark Shell Creating the SparkContext Configuring Spark Properties Building and Running a Spark Application Logging Spark, Hadoop, and the Enterprise Data Center Overview Spark and the Hadoop Ecosystem Spark and MapReduce Spark Streaming Spark Streaming Overview Example: Streaming Word Count Other Streaming Operations Sliding Window Operations Developing Spark Streaming Applications Common Spark Algorithms Iterative Algorithms Graph Analysis Machine Learning Improving Spark Performance Shared Variables: Broadcast Variables Shared Variables: Accumulators Common Performance Issues
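The Spark Streaming topics above include sliding window operations (e.g. a windowed word count). Spark's `reduceByKeyAndWindow` semantics can be illustrated without a cluster; the sketch below is a plain-Python analogue, not Spark code, and the sample micro-batches are made up:

```python
from collections import Counter, deque

def windowed_word_counts(batches, window_size):
    # Each batch is a list of lines arriving in one micro-batch interval.
    # Keep only the last `window_size` batches and re-count their words,
    # mimicking a sliding window over a DStream.
    window = deque(maxlen=window_size)
    results = []
    for batch in batches:
        window.append(batch)
        counter = Counter()
        for b in window:
            for line in b:
                counter.update(line.split())
        results.append(dict(counter))
    return results

batches = [["spark spark"], ["hadoop"], ["spark"]]
out = windowed_word_counts(batches, window_size=2)
```

In real Spark Streaming the window is expressed declaratively (window length and slide interval) and the engine incrementally maintains the counts instead of recomputing them.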
fsharpfordatascience F# for Data Science 21 hours Data science is the application of statistical analysis, machine learning, data visualization and programming for the purpose of understanding and interpreting real-world data. F# is a well suited programming language for data science as it combines efficient execution, REPL-scripting, powerful libraries and scalable data integration. In this instructor-led, live training, participants will learn how to use F# to solve a series of real-world data science problems. By the end of this training, participants will be able to: Use F#'s integrated data science packages Use F# to interoperate with other languages and platforms, including Excel, R, Matlab, and Python Use the Deedle package to solve time series problems Carry out advanced analysis with minimal lines of production-quality code Understand how functional programming is a natural fit for scientific and big data computations Access and visualize data with F# Apply F# for machine learning Explore solutions for problems in domains such as business intelligence and social gaming Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
wdneo4j Introduction to Neo4j - a Graph Database 7 hours
powerbiforbiandanalytics Power BI for Business Analysts 21 hours Microsoft Power BI is a free Software as a Service (SaaS) suite for analyzing data and sharing insights. Power BI dashboards provide a 360-degree view of the most important metrics in one place, updated in real time, and available on all of their devices. In this instructor-led, live training, participants will learn how to use Microsoft Power BI to analyze and visualize data using a series of sample data sets. By the end of this training, participants will be able to: Create visually compelling dashboards that provide valuable insights into data Obtain and integrate data from multiple data sources Build and share visualizations with team members Adjust data with Power BI Desktop Audience Business managers Business analysts Data analysts Business Intelligence (BI) and Data Warehouse (DW) teams Report developers Format of the course Part lecture, part discussion, exercises and heavy hands-on practice Introduction Data Visualization Authoring in Power BI Desktop Creating reports Interacting with reports Uploading reports to the Power BI Service Revising report layouts Publishing to PowerBI.com Sharing and collaborating with team members Data Modeling Acquiring data Modeling data Security Working with DAX Refreshing the source data Securing data Advanced querying and data modeling Data modeling principles Complex DAX patterns Power BI tips and tricks Closing remarks
powerbi Power BI 14 hours Power BI is a business analytics service created by Microsoft. Power BI Architecture Data sources On-premises and online data sources Data transformations + M language Direct connections to selected sources (SQL Server, OLAP) Modeling Relationships between tables (single and multidirectional data filtering) DAX - templates and best practices Introduction to DAX Most commonly used functions and calculation contexts Working with the time dimension (including fiscal periods, comparing periods, YTD) Parent-child hierarchies Filtering data relative to a hierarchy Popular DAX templates Visualizations Interactive data analysis Selecting the appropriate visualization Filters, grouping, exclusions Visualization on maps Visualizations using the R language Visualization enhancements (so-called custom visuals) Data Access Management - Row-Level Security Teamwork and mobile use with Power BI Dashboards and reports Q&A mechanism Workspaces Mobile applications
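The time-intelligence topics in the outline above (comparing periods, YTD) rest on a simple computation. The pure-Python sketch below reproduces what a year-to-date measure does over a sorted fact table; it is an illustration of the calculation only, not DAX (in Power BI this would typically be a measure using a function such as TOTALYTD), and the sample rows are made up:

```python
from datetime import date

def year_to_date(rows):
    # rows: list of (date, value) pairs, assumed sorted by date.
    # Returns the running YTD total for each row, resetting at each
    # new year - what a YTD measure yields per date in the filter context.
    totals, running, current_year = [], 0, None
    for d, value in rows:
        if d.year != current_year:
            running, current_year = 0, d.year
        running += value
        totals.append(running)
    return totals

rows = [(date(2017, 1, 5), 10), (date(2017, 2, 5), 5), (date(2018, 1, 5), 7)]
ytd = year_to_date(rows)
```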
powerbifordev Power BI for Developers 28 hours Microsoft Power BI is a free Software as a Service (SaaS) suite for analyzing data and sharing insights. Power BI dashboards provide a 360-degree view of the most important metrics in one place, updated in real time, and available on all of their devices. In this instructor-led, live training, participants will learn how to use Power BI to develop custom software solutions for the Power BI and Azure platforms. By the end of this training, participants will be able to: Configure real-time dashboards Create custom visualizations Integrate rich analytics into existing applications Embed interactive reports and visuals into existing applications Access data from within an application Master Power BI Portal, Desktop, Embedded and REST API Integrate R Scripts into Power BI Desktop Audience Developers Architects Format of the course Part lecture, part discussion, exercises and heavy hands-on practice Introduction The data workflow: data source, ETL (Extract, Transform, Load), data warehousing and data analysis Overview of Power BI Desktop Power BI Developer Tools Programming with TypeScript and d3.js Developing and Distributing Custom Visuals Developing R Scripts using RStudio Integrating R Scripts into Power BI Desktop Developing Custom R Visuals Developing with the Power BI REST API Updating the Power BI Dashboard in Real-time Embedding Dashboards and Reports into an Application with Power BI Embedded Closing remarks
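Updating a dashboard in real time, as mentioned in the outline above, is typically done by pushing rows to a push dataset through the Power BI REST API. The sketch below only builds the request for the documented `rows` endpoint; the dataset ID, table name and token are placeholders, and the actual HTTP call is left out so the fragment stays self-contained:

```python
import json

API_ROOT = "https://api.powerbi.com/v1.0/myorg"

def build_push_rows_request(dataset_id, table_name, rows, token):
    # POST .../datasets/{id}/tables/{name}/rows appends rows to a push dataset
    url = f"{API_ROOT}/datasets/{dataset_id}/tables/{table_name}/rows"
    headers = {
        "Authorization": f"Bearer {token}",  # AAD access token (placeholder)
        "Content-Type": "application/json",
    }
    body = json.dumps({"rows": rows})
    return url, headers, body

# Hypothetical identifiers, for illustration only
url, headers, body = build_push_rows_request(
    "d1", "Sales", [{"amount": 100, "region": "EU"}], "TOKEN")
# An actual call would be: requests.post(url, headers=headers, data=body)
```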
pdi2 Pentaho Data Integration (PDI) - ETL data processing module (intermediate level) 14 hours Pentaho is a product distributed under an Open Source license that delivers a full range of Business Intelligence solutions, including reporting, data analysis, management dashboards and data integration. With the Pentaho platform, individual business units gain access to a wide range of valuable information, from analyses of sales and the profitability of individual customers or products, through reporting for HR and finance departments, to summary information for senior management. The training is aimed at developers, architects and application administrators who want to create or maintain data extraction, transformation and loading (ETL) processes using Pentaho Data Integration (PDI). After the training, participants will have acquired skills in: installing and configuring the Pentaho environment; designing, implementing, monitoring, running and tuning ETL processes; working with data in PDI; handling various data types and formats; filtering, grouping and joining data; scheduling jobs; running transformations. The course is designed to take participants from beginner to intermediate level.
Day One Installing and configuring Pentaho Data Integration Creating a repository Getting to know the Spoon user interface Creating transformations Reading from and writing to files Working with databases (SQL query builder) Filtering, grouping and joining data Working with XLS Creating jobs Defining parameters and variables Day Two Data versioning (handling validity periods) Database transactions in transformations Using JavaScript Mapping transformations Data type conversion and column order in the stream Logging of processing Running transformations and jobs from the command line (kitchen.bat, pan.bat) Scheduling jobs Running transformations in parallel
bigdatabicriminal Big Data Business Intelligence for Criminal Intelligence Analysis 35 hours Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data pose are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another. In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results. By the end of this training, participants will be able to: Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation Implement industrial big data storage and processing solutions for data analysis Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation Audience Law Enforcement specialists with a technical background Format of the course Part lecture, part discussion, exercises and heavy hands-on practice ===== Day 01 ===== Overview of Big Data Business Intelligence for Criminal Intelligence Analysis Case Studies from Law Enforcement - Predictive Policing Big Data adoption rate in Law Enforcement Agencies and how they are aligning their future operation around Big Data Predictive Analytics Emerging technology solutions such as gunshot sensors, surveillance video and social media Using Big Data technology to mitigate information overload Interfacing Big Data with Legacy data Basic understanding of enabling technologies in predictive analytics Data Integration & Dashboard visualization Fraud management Business Rules and Fraud 
detection Threat detection and profiling Cost benefit analysis for Big Data implementation Introduction to Big Data Main characteristics of Big Data -- Volume, Variety, Velocity and Veracity. MPP (Massively Parallel Processing) architecture Data Warehouses – static schema, slowly evolving dataset MPP Databases: Greenplum, Exadata, Teradata, Netezza, Vertica etc. Hadoop Based Solutions – no conditions on structure of dataset. Typical pattern: HDFS, MapReduce (crunch), retrieve from HDFS Apache Spark for stream processing Batch - suited for analytical/non-interactive Volume: CEP streaming data Typical choices – CEP products (e.g. Infostreams, Apama, MarkLogic etc) Less production ready – Storm/S4 NoSQL Databases – (columnar and key-value): Best suited as analytical adjunct to data warehouse/database NoSQL solutions KV Store - Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB) KV Store - Dynamo, Voldemort, Dynomite, SubRecord, Mo8onDb, DovetailDB KV Store (Hierarchical) - GT.M, Cache KV Store (Ordered) - TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord KV Cache - Memcached, Repcached, Coherence, Infinispan, EXtremeScale, JBossCache, Velocity, Terracotta Tuple Store - Gigaspaces, Coord, Apache River Object Database - ZopeDB, db4o, Shoal Document Store - CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML-Databases, ThruDB, CloudKit, Persevere, Riak-Basho, Scalaris Wide Columnar Store - BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI Varieties of Data: Introduction to Data Cleaning issues in Big Data RDBMS – static structure/schema, does not promote agile, exploratory environment. NoSQL – semi structured, enough structure to store data without exact schema before storing data Data cleaning issues Hadoop When to select Hadoop?
STRUCTURED - Enterprise data warehouses/databases can store massive data (at a cost) but impose structure (not good for active exploration) SEMI STRUCTURED data – difficult to carry out using traditional solutions (DW/DB) Warehousing data = HUGE effort and static even after implementation For variety & volume of data, crunched on commodity hardware – HADOOP Commodity H/W needed to create a Hadoop Cluster Introduction to Map Reduce /HDFS MapReduce – distribute computing over multiple servers HDFS – make data available locally for the computing process (with redundancy) Data – can be unstructured/schema-less (unlike RDBMS) Developer responsibility to make sense of data Programming MapReduce = working with Java (pros/cons), manually loading data into HDFS ===== Day 02 ===== Big Data Ecosystem -- Building Big Data ETL (Extract, Transform, Load) -- Which Big Data Tools to use and when? Hadoop vs. Other NoSQL solutions For interactive, random access to data Hbase (column oriented database) on top of Hadoop Random access to data but restrictions imposed (max 1 PB) Not good for ad-hoc analytics, good for logging, counting, time-series Sqoop - Import from databases to Hive or HDFS (JDBC/ODBC access) Flume – Stream data (e.g. 
log data) into HDFS Big Data Management System Moving parts, compute nodes start/fail: ZooKeeper - For configuration/coordination/naming services Complex pipeline/workflow: Oozie – manage workflow, dependencies, daisy chain Deploy, configure, cluster management, upgrade etc. (sys admin): Ambari In Cloud: Whirr Predictive Analytics -- Fundamental Techniques and Machine Learning based Business Intelligence Introduction to Machine Learning Learning classification techniques Bayesian Prediction -- preparing a training file Support Vector Machine KNN p-Tree Algebra & vertical mining Neural Networks Big Data large variable problem -- Random forest (RF) Big Data Automation problem – Multi-model ensemble RF Automation through Soft10-M Text analytics tool - Treeminer Agile learning Agent based learning Distributed learning Introduction to Open source Tools for predictive analytics: R, Python, RapidMiner, Mahout Predictive Analytics Ecosystem and its application in Criminal Intelligence Analysis Technology and the investigative process Insight analytic Visualization analytics Structured predictive analytics Unstructured predictive analytics Threat/fraudster/vendor profiling Recommendation Engine Pattern detection Rule/Scenario discovery – failure, fraud, optimization Root cause discovery Sentiment analysis CRM analytics Network analytics Text analytics for obtaining insights from transcripts, witness statements, internet chatter, etc. Technology assisted review Fraud analytics Real Time Analytic ===== Day 03 ===== Real Time and Scalable Analytics Over Hadoop Why common analytic algorithms fail in Hadoop/HDFS Apache Hama - for Bulk Synchronous distributed computing Apache Spark - for cluster computing and real time analytic CMU Graphics Lab2 - Graph based asynchronous approach to distributed computing KNN p -- Algebra based approach from Treeminer for reduced hardware cost of operation Tools for eDiscovery and Forensics eDiscovery over Big Data vs.
Legacy data – a comparison of cost and performance Predictive coding and Technology Assisted Review (TAR) Live demo of vMiner for understanding how TAR enables faster discovery Faster indexing through HDFS – Velocity of data NLP (Natural Language processing) – open source products and techniques eDiscovery in foreign languages -- technology for foreign language processing Big Data BI for Cyber Security – Getting a 360-degree view, speedy data collection and threat identification Understanding the basics of security analytics -- attack surface, security misconfiguration, host defenses Network infrastructure / Large datapipe / Response ETL for real time analytic Prescriptive vs predictive – Fixed rule based vs auto-discovery of threat rules from Meta data Gathering disparate data for Criminal Intelligence Analysis Using IoT (Internet of Things) as sensors for capturing data Using Satellite Imagery for Domestic Surveillance Using surveillance and image data for criminal identification Other data gathering technologies -- drones, body cameras, GPS tagging systems and thermal imaging technology Combining automated data retrieval with data obtained from informants, interrogation, and research Forecasting criminal activity ===== Day 04 ===== Fraud prevention BI from Big Data in Fraud Analytics Basic classification of Fraud Analytics -- rules-based vs predictive analytics Supervised vs unsupervised Machine learning for Fraud pattern detection Business to business fraud, medical claims fraud, insurance fraud, tax evasion and money laundering Social Media Analytics -- Intelligence gathering and analysis How Social Media is used by criminals to organize, recruit and plan Big Data ETL API for extracting social media data Text, image, meta data and video Sentiment analysis from social media feed Contextual and non-contextual filtering of social media feed Social Media Dashboard to integrate diverse social media Automated profiling of social media profile Live demo of each 
analytic will be given through Treeminer Tool Big Data Analytics in image processing and video feeds Image Storage techniques in Big Data -- Storage solution for data exceeding petabytes LTFS (Linear Tape File System) and LTO (Linear Tape Open) GPFS-LTFS (General Parallel File System - Linear Tape File System) -- layered storage solution for Big image data Fundamentals of image analytics Object recognition Image segmentation Motion tracking 3-D image reconstruction Biometrics, DNA and Next Generation Identification Programs Beyond fingerprinting and facial recognition Speech recognition, keystroke (analyzing a user's typing pattern) and CODIS (Combined DNA Index System) Beyond DNA matching: using forensic DNA phenotyping to construct a face from DNA samples Big Data Dashboard for quick accessibility of diverse data and display: Integration of existing application platform with Big Data Dashboard Big Data management Case Study of Big Data Dashboard: Tableau and Pentaho Use Big Data app to push location-based services in Govt. Tracking system and management ===== Day 05 ===== How to justify Big Data BI implementation within an organization: Defining the ROI (Return on Investment) for implementing Big Data Case studies for saving Analyst Time in collection and preparation of Data – increasing productivity Revenue gain from lower database licensing cost Revenue gain from location-based services Cost savings from fraud prevention An integrated spreadsheet approach for calculating approximate expenses vs. Revenue gain/savings from Big Data implementation. Step by Step procedure for replacing a legacy data system with a Big Data System Big Data Migration Roadmap What critical information is needed before architecting a Big Data system? What are the different ways for calculating Volume, Velocity, Variety and Veracity of data How to estimate data growth Case studies Review of Big Data Vendors and review of their products.
Accenture APTEAN (Formerly CDC Software) Cisco Systems Cloudera Dell EMC GoodData Corporation Guavus Hitachi Data Systems Hortonworks HP IBM Informatica Intel Jaspersoft Microsoft MongoDB (Formerly 10Gen) MU Sigma Netapp Opera Solutions Oracle Pentaho Platfora Qliktech Quantum Rackspace Revolution Analytics Salesforce SAP SAS Institute Sisense Software AG/Terracotta Soft10 Automation Splunk Sqrrl Supermicro Tableau Software Teradata Think Big Analytics Tidemark Systems Treeminer VMware (Part of EMC) Q/A session
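Of the classification techniques listed under Day 02 of this course (Bayesian prediction, SVM, KNN), k-nearest neighbours is compact enough to sketch from scratch. The example below is a minimal, generic implementation in plain Python, not any vendor's tool, and the training points and labels are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs; query: feature vector.
    # Classify by majority vote among the k nearest training points.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Made-up 2-D feature vectors standing in for, e.g., transaction features
train = [((0, 0), "benign"), ((0, 1), "benign"),
         ((5, 5), "fraud"), ((5, 6), "fraud"), ((6, 5), "fraud")]
label = knn_predict(train, (5, 5), k=3)
```

At production scale the expensive part is the nearest-neighbour search, which is why distributed variants (such as the p-tree-based approach mentioned in the outline) restructure the data rather than scan it linearly.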
pdi3 Pentaho Data Integration (PDI) - ETL data processing module (advanced level) 21 hours Pentaho is a product distributed under an Open Source license that delivers a full range of Business Intelligence solutions, including reporting, data analysis, management dashboards and data integration. With the Pentaho platform, individual business units gain access to a wide range of valuable information, from analyses of sales and the profitability of individual customers or products, through reporting for HR and finance departments, to summary information for senior management. The training is aimed at developers, architects and application administrators who want to create or maintain data extraction, transformation and loading (ETL) processes using Pentaho Data Integration (PDI). After the training, participants will have acquired skills in: installing and configuring the Pentaho environment; designing, implementing, monitoring, running and tuning ETL processes; working with data in PDI; handling various data types and formats; filtering, grouping and joining data; scheduling jobs; running transformations; creating clusters. The course is designed to take participants from beginner to advanced level.
Day One Installing and configuring Pentaho Data Integration Creating a repository Getting to know the Spoon user interface Creating transformations Reading from and writing to files Working with databases (SQL query builder) Filtering, grouping and joining data Working with XLS Day Two Creating jobs Defining parameters and variables Data versioning (handling validity periods) Database transactions in transformations Using JavaScript Mapping transformations Data type conversion and column order in the stream Logging of processing Day Three Running transformations and jobs from the command line (kitchen.bat, pan.bat) Scheduling jobs Running transformations in parallel Remote execution (carte.bat) Creating clusters and partitioning Versioning and team work
pbi4 Pentaho Business Intelligence (PBI) - reporting modules 28 hours Pentaho BI Suite is a Business Intelligence (BI) system for advanced reporting and multidimensional analysis of business data. It can combine data from many sources and process it flexibly. The Pentaho BI Suite platform covers the following areas:
- Reporting (Pentaho Reporting)
- Data analysis (Analysis)
- Management dashboards (Dashboards)
- Data Mining
- Pentaho Metadata
- Pentaho Data Integration (Kettle)
Tools that support managing and designing BI applications in the Pentaho environment: Data Integration, Design Studio, Pentaho SDK, Report Designer, Schema Workbench (the successor to Pentaho Cube Designer). This training is a comprehensive course that enables participants to work efficiently with Report Designer and Business Intelligence Server and to integrate both with Pentaho Data Integration.
Report Designer (two days)
Basic level (day one)
- Getting to know the Report Designer user interface
- Getting to know the user console (Pentaho BI Suite)
- Creating reports using simple connections (JDBC)
- Using the query builder
- Formatting data (styles, attributes)
- Publishing reports
- Using the Report Wizard as an alternative way to create reports
- Conditional formatting
- Using parameters (prompts)
- Creating charts
Advanced level (day two)
- Defining connections to XML, tables and PDI
- Creating reports with subqueries (subreports) and JavaScript queries
- Advanced data formatting (HTML, JS, Excel, events)
- Using parameters and functions
- Using transformations (PDI) as a data source
Business Intelligence Server (one day)
User console
- Getting to know the user interface
- Creating reports based on database connections and CSV files
- Defining queries
- Scheduling reports
- Sharing
Administration console
- Users and groups
- Defining connections
- Scheduling
Data Integration for Report Designer (one day)
- Basics of creating transformations in Pentaho Data Integration (PDI)
- Overview of input data types
- Processing data with simple mechanisms
- Filtering data
- Joining data
- Grouping
Integrating Data Integration with Report Designer
- Creating a connection to a transformation
- Using data from a transformation
kibana Kibana: Essentials 14 hours This training introduces Kibana to users of Elasticsearch. Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps. Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time.
Setting up
- Prerequisites
- Elasticsearch: Introduction
- Elasticsearch: Installation and Configuration
- elasticdump
- Brief Introduction to Kibana
- Nested Objects - a Limitation of Kibana
Setting up Kibana
- Kibana: Install and Configure
- Configuring Elasticsearch and connecting Kibana
- Dynamic Mapping Limitations
- Tribe Nodes
Using Kibana
- Indices and Filters
- Discover Interface: Time Filter, Toolbar and Searchbar, Field Lists
- Document Data and Context - Add/View/Edit/Delete
Visualization Interface
- Aggregations
- Bucket Aggregations - Date Histogram, Date Range, Range, Histogram, Terms and Filters
- Metric Aggregations - Count, Sum, Average, Min, Max, Percentile, Percentile Ranks and Unique Count
- Creating Visualizations: Chart, Line, Area, Data Table, Metrics and other visualization types
Dashboard Interface: Building, Merging, Loading and Sharing
Graph: Configure, Troubleshoot and Limitations
Kibana: Dev Console
- Overview
- Shortcuts: Brief
Settings and Configuring Kibana in Production
- SSL encryption
- Load balancing using Elasticsearch nodes
- Management: Managing Fields and Formatters; Saved Searches, Visualizations and Dashboards
- Apache/nginx proxy for security
Plugins
- Install/Update/Disable/Remove Plugins
- Plugins Manager
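Every Kibana visualization ultimately issues an Elasticsearch aggregation request: a bucket aggregation to split the data and, inside it, a metric aggregation to summarise each bucket. A sketch of the kind of request body a Date Histogram bucket with an Average metric corresponds to (the index and field names `@timestamp` and `bytes` are hypothetical examples, and the exact interval parameter name varies by Elasticsearch version):

```python
import json

# Request body for a date-histogram bucket aggregation with a nested
# average metric, as a Kibana visualization would roughly generate it.
query = {
    "size": 0,  # return aggregations only, no raw hits
    "aggs": {
        "per_day": {  # bucket aggregation: one bucket per day
            "date_histogram": {"field": "@timestamp", "interval": "1d"},
            "aggs": {
                # metric aggregation computed within each daily bucket
                "avg_bytes": {"avg": {"field": "bytes"}}
            },
        }
    },
}
body = json.dumps(query)  # what would be POSTed to the _search endpoint
```

Nesting further bucket aggregations inside `per_day` is how Kibana builds split-series or split-chart views.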
TalendDI Talend Open Studio for Data Integration 28 hours Talend Open Studio for Data Integration is an open-source data integration product used to combine, convert and update data in various locations across a business. In this instructor-led, live training, participants will learn how to use the Talend ETL tool to carry out data transformation, data extraction, and connectivity with Hadoop, Hive, and Pig. By the end of this training, participants will be able to:
- Explain the concepts behind ETL (Extract, Transform, Load) and propagation
- Define ETL methods and ETL tools to connect with Hadoop
- Efficiently amass, retrieve, digest, consume, transform and shape big data in accordance with business requirements
Audience
- Business intelligence professionals
- Project managers
- Database professionals
- SQL developers
- ETL developers
- Solution architects
- Data architects
- Data warehousing professionals
- System administrators and integrators
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
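The Extract-Transform-Load pattern the course is built around can be sketched in a few lines of plain Python (this illustrates the concept only, not Talend itself; the sample CSV data is invented):

```python
import csv
import io

raw = "name,amount\nAlice,120\nBob,80\nAlice,50\n"

# Extract: read rows from a CSV source
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: convert types and aggregate amounts per name
totals = {}
for r in rows:
    totals[r["name"]] = totals.get(r["name"], 0) + int(r["amount"])

# Load: materialise the result in the target structure
target = [{"name": n, "total": t} for n, t in sorted(totals.items())]
```

In a real pipeline the extract step would read from databases or files, and the load step would write to a warehouse table; tools like Talend let you assemble these stages graphically instead of by hand.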
PentahoDI Pentaho Data Integration Fundamentals 21 hours Pentaho Data Integration is an open-source data integration tool for defining jobs and data transformations. In this instructor-led, live training, participants will learn how to use Pentaho Data Integration's powerful ETL capabilities and rich GUI to manage an entire big data lifecycle, maximizing the value of data to the organization. By the end of this training, participants will be able to:
- Create, preview, and run basic data transformations containing steps and hops
- Configure and secure the Pentaho Enterprise Repository
- Harness disparate sources of data and generate a single, unified version of the truth in an analytics-ready format
- Provide results to third-party applications for further processing
Audience
- Data analysts
- ETL developers
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
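In PDI terms, a transformation is a set of steps (each operating on a stream of rows) connected by hops (which carry the output rows of one step to the next). The idea can be modelled as chained generators in plain Python; this is a conceptual sketch only, not the PDI API:

```python
# Each "step" is a function over a row stream; "hops" are the
# connections chaining one step's output into the next step's input.

def input_step(rows):
    """Source step: emit raw rows."""
    yield from rows

def filter_step(stream, predicate):
    """Transform step: keep only rows matching the predicate."""
    return (row for row in stream if predicate(row))

def select_step(stream, field):
    """Transform step: project a single field from each row."""
    return (row[field] for row in stream)

rows = [{"id": 1, "ok": True}, {"id": 2, "ok": False}, {"id": 3, "ok": True}]

# Hops: output of each step feeds the next, forming the transformation
pipeline = select_step(filter_step(input_step(rows), lambda r: r["ok"]), "id")
result = list(pipeline)
```

Because each stage is lazy, rows flow through the whole chain one at a time, which mirrors how PDI streams rows through steps rather than materialising intermediate tables.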
JasperSoftBI JasperSoft BI 14 hours JasperReports is an open-source reporting library that can be embedded into any Java application. JasperReports Server is a Java EE web application with advanced reporting capabilities, including scheduling and permissions. In this instructor-led, live training, participants will learn to view and interact with business data as well as create and design reports and dashboards that are viewable on phones and tablets. By the end of this training, participants will be able to:
- Set up and configure a JasperSoft ETL project
- Design and run an ETL job
- Use iReport to generate charts, images, sub-reports, and cross tabs
Audience
- BI analysts
- ETL developers
- Database professionals
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.

Upcoming Courses



Course Discounts

Course | Venue | Date | Price [Remote / Classroom]
Adobe Creative Cloud - Video Editing | Katowice, ul. Opolska 22 | Mon, 2018-01-29 09:00 | 3861PLN / 2455PLN
DTP (InDesign, Photoshop, Illustrator, Acrobat) | Kielce, ul. Warszawska 19 | Mon, 2018-01-29 09:00 | 5940PLN / 2980PLN
Angular 4 - Good Practices | Katowice, ul. Opolska 22 | Tue, 2018-01-30 09:00 | 7920PLN / 3450PLN
Psychology of interpersonal cooperation | Wrocław, ul. Ludwika Rydygiera 2a/22 | Tue, 2018-01-30 09:00 | 5148PLN / 1430PLN
Effective interpersonal communication with elements of assertiveness | Warszawa, ul. Złota 3/11 | Wed, 2018-01-31 09:00 | 5148PLN / 1430PLN
Agile Software Testing | Zielona Góra, ul. Reja 6 | Thu, 2018-02-01 09:00 | 4257PLN / 2629PLN
PostgreSQL Administration and Development | Katowice, ul. Opolska 22 | Mon, 2018-02-05 09:00 | 7821PLN / 4007PLN
DTP (InDesign, Photoshop, Illustrator, Acrobat) | Opole, Wladyslawa Reymonta 29 | Mon, 2018-02-05 09:00 | 5940PLN / 4230PLN
International Accounting Standards and International Financial Reporting Standards (IAS, IFRS) | Poznan, Garbary 100/63 | Fri, 2018-02-09 09:00 | 3950PLN / 1188PLN
Creating and managing Web sites | Katowice, ul. Opolska 22 | Mon, 2018-02-12 09:00 | 5841PLN / 3048PLN
Social Media - facebook, twitter, blog, youtube, google+ | Rzeszów, Plac Wolności 13 | Tue, 2018-02-13 09:00 | 1881PLN / 952PLN
Javascript Basics | Poznan, Garbary 100/63 | Tue, 2018-02-13 09:00 | 4455PLN / 1885PLN
SQL in Microsoft Access | Kraków, ul. Rzemieślnicza 1 | Thu, 2018-02-15 09:00 | 10266PLN / 3911PLN
Effective interpersonal communication with elements of assertiveness | Gdynia, ul. Ejsmonda 2 | Mon, 2018-02-19 09:00 | 5148PLN / 1530PLN
DTP (InDesign, Photoshop, Illustrator, Acrobat) | Katowice, ul. Opolska 22 | Mon, 2018-03-05 09:00 | 5940PLN / 3730PLN
Visual Basic for Applications (VBA) in Excel - Introduction to programming | Katowice, ul. Opolska 22 | Mon, 2018-03-05 09:00 | 3564PLN / 2291PLN
Certified Agile Tester | Katowice, ul. Opolska 22 | Mon, 2018-04-02 09:00 | 8910PLN / 4720PLN
Perfect tester | Szczecin, ul. Sienna 9 | Wed, 2018-04-04 09:00 | 5920PLN / 2294PLN
Quality Assurance and Continuous Integration | Katowice, ul. Opolska 22 | Thu, 2018-04-12 09:00 | 2673PLN / 2037PLN
Oracle 12c - Introduction to SQL | Łódź, ul. Tatrzańska 11 | Tue, 2018-06-12 09:00 | 3960PLN / 1920PLN

Course Discounts Newsletter

We respect the privacy of your email address. We will not pass on or sell your address to others.
You can always change your preferences or unsubscribe completely.

Some of our clients