Hadoop Training

Apache Hadoop is an open-source implementation of two of Google's core Big Data technologies: GFS (Google File System) and the MapReduce programming model. It is a complete system for storing and processing large data sets. Hadoop is used by most of the world's leaders in cloud-based services, such as Yahoo, Facebook and LinkedIn.

Participant feedback

Big Data Hadoop Analyst Training

The hands-on part.

Arkadiusz Iwaszko - NIIT Limited

Big Data Hadoop Administration Training

1. First-class hardware
2. A good first introduction to the world of Hadoop and its technologies

Przemysław Ćwik - Delphi Poland SA

Administrator Training for Apache Hadoop

The trainer gave real-life examples.

Simon Hahn - OPITZ CONSULTING Deutschland GmbH

Administrator Training for Apache Hadoop

Great competence of the trainer.

Grzegorz Gorski - OPITZ CONSULTING Deutschland GmbH

Administrator Training for Apache Hadoop

Many hands-on sessions.

Jacek Pieczątka - OPITZ CONSULTING Deutschland GmbH

A practical introduction to Data Analysis and Big Data

Willingness to share more

Balaram Chandra Paul - MOL Information Technology Asia Limited

Subcategories

Hadoop Course Outlines

Code Name Duration Course overview
hadoopadm1 Hadoop For Administrators 21 hours Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three-day (optionally four-day) course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice cluster bulk data loads, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes with a discussion of securing the cluster with Kerberos. “…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized” — Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising Audience Hadoop administrators Format Lectures and hands-on labs, approximate balance 60% lectures, 40% labs. Introduction Hadoop history, concepts Ecosystem Distributions High-level architecture Hadoop myths Hadoop challenges (hardware / software) Labs: discuss your Big Data projects and problems Planning and installation Selecting software, Hadoop distributions Sizing the cluster, planning for growth Selecting hardware and network Rack topology Installation Multi-tenancy Directory structure, logs Benchmarking Labs: cluster install, run performance benchmarks HDFS operations Concepts (horizontal scaling, replication, data locality, rack awareness) Nodes and daemons (NameNode, Secondary NameNode, HA Standby NameNode, DataNode) Health monitoring Command-line and browser-based administration Adding storage, replacing defective drives Labs: getting familiar with HDFS command lines Data ingestion Flume for logs and other data ingestion into HDFS Sqoop for importing from SQL databases to HDFS, as well as exporting back to SQL Hadoop data warehousing with Hive Copying data between clusters (distcp) Using S3 as complementary to HDFS Data ingestion best practices and architectures Labs: setting up and using Flume, the same for Sqoop MapReduce operations and administration Parallel computing before MapReduce: compare HPC vs Hadoop administration MapReduce cluster loads Nodes and daemons (JobTracker, TaskTracker) MapReduce UI walkthrough MapReduce configuration Job config Optimizing MapReduce Fool-proofing MR: what to tell your programmers Labs: running MapReduce examples YARN: new architecture and new capabilities YARN design goals and implementation architecture New actors: ResourceManager, NodeManager, Application Master Installing YARN Job scheduling under YARN Labs: investigate job scheduling Advanced topics Hardware monitoring Cluster monitoring Adding and removing servers, upgrading Hadoop Backup, recovery and business continuity planning Oozie job workflows Hadoop high availability (HA) Hadoop Federation Securing your cluster with Kerberos Labs: set up monitoring Optional tracks Cloudera Manager for cluster administration, monitoring, and routine tasks; installation, use. In this track, all exercises and labs are performed within the Cloudera distribution environment (CDH5) Ambari for cluster administration, monitoring, and routine tasks; installation, use. In this track, all exercises and labs are performed within the Ambari cluster manager and Hortonworks Data Platform (HDP 2.0)
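The MapReduce model that the outline above exercises in its labs can be sketched outside of Hadoop with a minimal pure-Python word count. This is only an illustration of the map/shuffle/reduce phases; the function names are invented, not part of any Hadoop API.

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (key, value) pairs, as a Hadoop mapper would
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework's shuffle/sort step does
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_phase(groups):
    # Aggregate each key's values, as a Hadoop reducer would
    return {key: sum(values) for key, values in groups}

counts = reduce_phase(shuffle(map_phase(["big data", "big cluster"])))
print(counts)  # {'big': 2, 'data': 1, 'cluster': 1}
```

In a real cluster the three phases run on different nodes and the shuffle moves data over the network, which is where most of the tuning discussed in the course happens.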
hbasedev HBase for Developers 21 hours This course introduces HBase – a NoSQL store on top of Hadoop. The course is intended for developers who will be using HBase to develop applications, and administrators who will manage HBase clusters. We will walk a developer through HBase architecture, data modelling and application development on HBase. The course also discusses using MapReduce with HBase and some administration topics related to performance optimization. The course is very hands-on with lots of lab exercises. Duration: 3 days Audience: Developers & Administrators Section 1: Introduction to Big Data & NoSQL Big Data ecosystem NoSQL overview CAP theorem When is NoSQL appropriate Columnar storage HBase and NoSQL Section 2: HBase Intro Concepts and Design Architecture (HMaster and Region Server) Data integrity HBase ecosystem Lab: Exploring HBase Section 3: HBase Data model Namespaces, Tables and Regions Rows, columns, column families, versions HBase Shell and Admin commands Lab: HBase Shell Section 4: Accessing HBase using the Java API Introduction to the Java API Read / Write path Time Series data Scans MapReduce Filters Counters Co-processors Labs (multiple): Using the HBase Java API to implement time series, MapReduce, filters and counters Section 5: HBase Schema Design: Group session students are presented with real-world use cases students work in groups to come up with design solutions discuss / critique and learn from multiple designs Labs: implement a scenario in HBase Section 6: HBase Internals Understanding HBase under the hood Memfile / HFile / WAL HDFS storage Compactions Splits Bloom Filters Caches Diagnostics Section 7: HBase installation and configuration hardware selection install methods common configurations Lab: installing HBase Section 8: HBase ecosystem developing applications using HBase interacting with the rest of the Hadoop stack (MapReduce, Pig, Hive) frameworks around HBase advanced concepts (co-processors) Labs: writing HBase applications Section 9: Monitoring And Best Practices monitoring tools and practices optimizing HBase HBase in the cloud real-world use cases of HBase Labs: checking HBase vitals
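The HBase data model covered in the course (row key, column family, column qualifier, versioned cells) can be pictured as a nested map. The sketch below is a toy model in plain Python for illustration only; real HBase persists this structure in HFiles on HDFS, and the function names here are invented.

```python
import time

# Toy sketch of HBase's logical data model: a table maps
# row key -> column family -> column qualifier -> {timestamp: value}
table = {}

def put(row, family, qualifier, value, ts=None):
    ts = ts if ts is not None else time.time_ns()
    cell = table.setdefault(row, {}).setdefault(family, {}).setdefault(qualifier, {})
    cell[ts] = value

def get(row, family, qualifier):
    # Return the newest version, as HBase does by default
    versions = table[row][family][qualifier]
    return versions[max(versions)]

put("user1", "info", "name", "Alice", ts=1)
put("user1", "info", "name", "Alicia", ts=2)  # newer version of the same cell
print(get("user1", "info", "name"))  # Alicia
```

The versioned-cell structure is what makes time-series storage, one of the lab topics, a natural fit for HBase.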
bdhat Big Data Hadoop Analyst Training 28 hours Big Data Analyst Training is a hands-on course recommended to anyone who wants to become a Data Scientist. The course focuses on the skills a modern analyst needs when working with Big Data technology. It presents tools for accessing, modifying, transforming and analyzing complex data structures stored in a Hadoop cluster, and covers topics from the Hadoop ecosystem (Pig, Hive, Impala, ELK and others): the functionality of Pig, Hive, Impala and ELK for collecting data, saving results and analytics; how Pig, Hive and Impala can improve the performance of typical, everyday analytical tasks; running real-time, interactive analyses of huge data sets to extract valuable business insights and how to interpret the conclusions; and running complex queries on very large data volumes. Topics: Hadoop basics. Introduction to Pig. Basic data analysis with Pig. Processing complex data with Pig. Multi-dataset operations with Pig. Pig troubleshooting and optimization. Introduction to Hive, Impala and ELK. Querying in Hive, Impala and ELK. Data management in Hive. Data storage and performance. Analyses with Hive and Impala. Working with Impala and ELK. Analysis of text and complex data types. Optimizing Hive, Pig, Impala and ELK. Interoperability and workflow. Questions, exercises, certification.
hivehiveql Data Analysis with Hive/HiveQL 7 hours This course covers how to use the Hive SQL language (AKA: Hive HQL, SQL on Hive, HiveQL) for people who extract data from Hive. Hive Overview Architecture and design Data types SQL support in Hive Creating Hive tables and querying Partitions Joins Text processing Labs: various labs on processing data with Hive DQL (Data Query Language) in Detail SELECT clause Column aliases Table aliases Date types and Date functions Group function Table joins JOIN clause UNION operator Nested queries Correlated subqueries
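HiveQL is largely standard SQL, so the SELECT/GROUP BY shape the outline above covers can be tried in any SQL engine. The sketch below uses SQLite only because it ships with Python; the table and data are invented for illustration.

```python
import sqlite3

# A Hive-style aggregate query, run against an in-memory SQLite database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (user TEXT, url TEXT)")
conn.executemany("INSERT INTO page_views VALUES (?, ?)",
                 [("ann", "/home"), ("ann", "/docs"), ("bob", "/home")])

# Count views per user, highest first
rows = conn.execute(
    "SELECT user, COUNT(*) AS views FROM page_views "
    "GROUP BY user ORDER BY views DESC"
).fetchall()
print(rows)  # [('ann', 2), ('bob', 1)]
```

The difference in Hive is not the query syntax but the execution: the same statement is compiled into distributed jobs over data in HDFS.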
IntroToAvro Apache Avro: Data serialization for distributed applications 14 hours This course is intended for developers. Format of the course Lectures, hands-on practice, small tests along the way to gauge understanding Principles of distributed computing Apache Spark Hadoop Principles of data serialization How a data object is passed over the network Serialization of objects Serialization approaches Thrift Protocol Buffers Apache Avro data structure size, speed, format characteristics persistent data storage integration with dynamic languages dynamic typing schemas untagged data change management Data serialization and distributed computing Avro as a subproject of Hadoop Java serialization Hadoop serialization Avro serialization Using Avro with Hive (AvroSerDe) Pig (AvroStorage) Porting Existing RPC Frameworks
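The "untagged data" idea from the outline above is that when both sides share a schema, records can be serialized without field names in the payload. The toy encoder below illustrates that principle with the standard `struct` module; it is not the real Avro wire format, and the schema and names are invented.

```python
import struct

# With a schema known to both sides, records travel untagged:
# only the values are on the wire, in schema order.
schema = [("id", "i"), ("score", "d")]           # field name -> struct code
fmt = ">" + "".join(code for _, code in schema)  # big-endian int32 + float64

def serialize(record):
    return struct.pack(fmt, *(record[name] for name, _ in schema))

def deserialize(payload):
    values = struct.unpack(fmt, payload)
    return dict(zip((name for name, _ in schema), values))

payload = serialize({"id": 7, "score": 0.5})
print(len(payload))          # 12 bytes: no field names in the payload
print(deserialize(payload))  # {'id': 7, 'score': 0.5}
```

Compactness is one reason Avro payloads are small; the other course topic, change management, comes from Avro resolving the writer's schema against the reader's at decode time.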
mdlmrah The MapReduce Model in the Apache Hadoop Implementation 14 hours The training is aimed at organizations that want to deploy solutions for processing large data sets on clusters. Data Mining and Business Intelligence Introduction Areas of application Capabilities Foundations of data mining and knowledge discovery Big Data What do we mean by Big Data? Big Data vs Data Mining MapReduce Model description Example application Statistics The cluster model Hadoop What Hadoop is Installation Basic configuration Cluster settings Architecture and configuration Hadoop Distributed File System Console commands The DistCp tool MapReduce and Hadoop Streaming Administration and configuration of Hadoop On Demand Alternative solutions
bigddbsysfun Big Data & Database Systems Fundamentals 14 hours The course is part of the Data Scientist skill set (Domain: Data and Technology). Data Warehousing Concepts What is a Data Warehouse? Difference between OLTP and Data Warehousing Data Acquisition Data Extraction Data Transformation Data Loading Data Marts Dependent vs Independent Data Marts Database design ETL Testing Concepts: Introduction Software development life cycle Testing methodologies ETL Testing Work Flow Process ETL Testing Responsibilities in DataStage Big Data Fundamentals Big Data and its role in the corporate world The phases of development of a Big Data strategy within a corporation Explain the rationale underlying a holistic approach to Big Data Components needed in a Big Data Platform Big Data storage solutions Limits of Traditional Technologies Overview of database types NoSQL Databases Hadoop MapReduce Apache Spark
Preparation for the CCAH (Certified Administrator for Apache Hadoop) Exam 35 hours The course is intended for IT professionals working on solutions that require storing and processing large data sets in distributed systems. Course goals: gaining knowledge of Apache Hadoop system administration and preparing for the CCAH (Cloudera Certified Administrator for Apache Hadoop) exam. 1: HDFS (38%) The functions of the individual Apache Hadoop daemons Storing and processing data in Hadoop When we should choose Hadoop HDFS architecture and operation HDFS Federation HDFS High Availability HDFS security (Kerberos) The file read and write process in HDFS 2: MapReduce (10%) How MapReduce v1 works How MapReduce v2 (YARN) works 3: Hadoop Cluster Planning (12%) Choosing the hardware and operating system Requirements analysis Tuning kernel parameters and storage configuration Matching the hardware configuration to the requirements System scalability: CPU load, RAM, storage (IO) and system capacity Storage-level scalability: JBOD vs RAID, network storage and the impact of virtualization on system performance Network topologies: network load in Hadoop (HDFS and MapReduce) and connection optimization 4: Hadoop Cluster Installation and Administration (17%) The impact of failures on cluster operation Log monitoring Basic metrics used by a Hadoop cluster Hadoop cluster monitoring tools Hadoop cluster administration tools 5: Resource Management (6%) Queue architecture and functions Resource allocation with FIFO queues Resource allocation with fair queues Resource allocation with capacity queues 6: Monitoring and Logging (12%) Metrics monitoring Managing the NameNode and JobTracker from the Web GUI log4j configuration How to monitor Hadoop daemons Monitoring CPU usage on key servers in the cluster Monitoring RAM and swap usage Managing and viewing logs Interpreting logs 7: The Hadoop Ecosystem (5%) Auxiliary tools
druid Druid: Build a fast, real-time data analysis system 21 hours Druid is an open-source, column-oriented, distributed data store written in Java. It was designed to quickly ingest massive quantities of event data and execute low-latency OLAP queries on that data. Druid is commonly used in business intelligence applications to analyze high volumes of real-time and historical data. It is also well suited for powering fast, interactive, analytic dashboards for end-users. Druid is used by companies such as Alibaba, Airbnb, Cisco, eBay, Netflix, Paypal, and Yahoo. In this course we explore some of the limitations of data warehouse solutions and discuss how Druid can complement those technologies to form a flexible and scalable streaming analytics stack. We walk through many examples, offering participants the chance to implement and test Druid-based solutions in a lab environment. Audience Application developers Software engineers Technical consultants DevOps professionals Architecture engineers Format of the course Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding Introduction Installing and starting Druid Druid architecture and design Real-time ingestion of event data Sharding and indexing Loading data Querying data Visualizing data Running a distributed cluster Druid + Apache Hive Druid + Apache Kafka Druid + others Troubleshooting Administrative tasks
apacheh Administrator Training for Apache Hadoop 35 hours The main goal of the training is to gain advanced knowledge of Apache Hadoop administration in MapReduce and YARN environments. The training focuses mainly on the architecture of the Hadoop system, in particular the HDFS file system and the MapReduce and YARN programming models, and on issues related to planning, installing, configuring, administering, managing and monitoring a Hadoop cluster. Other Big Data topics such as HBase, Cassandra, Impala, Pig, Hive and Sqoop are also covered, though only briefly. The course is intended mainly for IT professionals who want to prepare for and pass the CCAH (Cloudera Certified Administrator for Apache Hadoop) exam. 1: HDFS (17%) The functions of the individual Apache Hadoop daemons Storing and processing data in Hadoop When we should choose Hadoop HDFS architecture and operation HDFS Federation HDFS High Availability HDFS security (Kerberos) Case studies The file read and write process in HDFS The HDFS command-line interface 2: YARN and MapReduce version 2 (MRv2) (17%) YARN configuration YARN deployment YARN architecture and operation Resource allocation in YARN Job execution flow in YARN Migration from MRv1 to YARN 3: Hadoop Cluster Planning (16%) Requirements analysis and hardware selection Requirements analysis and operating system selection Tuning kernel parameters and storage configuration Matching the hardware configuration to the requirements Selecting cluster components and auxiliary tools System scalability: CPU load, RAM, storage (IO) and system capacity Storage-level scalability: JBOD vs RAID, network storage and the impact of virtualization on system performance Network topologies: network load in Hadoop (HDFS and MapReduce) and connection optimization 4: Hadoop Cluster Installation and Administration (25%) The impact of failures on cluster operation Log monitoring Basic metrics used by a Hadoop cluster Hadoop cluster monitoring tools Auxiliary tools: Impala, Flume, Oozie, Hue, Cloudera Manager, Sqoop, Hive, Pig and others Hadoop cluster administration tools 5: Resource Management (10%) Queue architecture and functions Resource allocation with FIFO queues Resource allocation with fair queues Resource allocation with capacity queues 6: Monitoring and Logging (15%) Metrics monitoring Managing the NameNode and JobTracker from the Web GUI How to monitor Hadoop daemons Monitoring CPU usage on key servers in the cluster Monitoring RAM and swap usage Managing and viewing logs Interpreting logs
voldemort Voldemort: Setting up a key-value distributed data store 14 hours Voldemort is an open-source distributed data store that is designed as a key-value store. It is used at LinkedIn by numerous critical services powering a large portion of the site. This course will introduce the architecture and capabilities of Voldemort and walk participants through the setup and application of a key-value distributed data store. Audience Software developers System administrators DevOps engineers Format of the course Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding Introduction Understanding distributed key-value storage systems Voldemort data model and architecture Downloading and configuration Command line operations Clients and servers Working with Hadoop Configuring build and push jobs Rebalancing a Voldemort instance Serving Large-scale Batch Computed Data Using the Admin Tool Performance tuning
hadoopadm Big Data Hadoop Administration Training 21 hours The training provides a complete understanding of all the steps needed to operate and maintain a Hadoop cluster, from hardware specification through system installation and configuration to load balancing, tuning, diagnostics and troubleshooting in deployment. The course is dedicated to administrators who will build and/or maintain a Hadoop cluster. Training materials: Student Guide and Lab Guide Apache Hadoop and HDFS Loading data into Hadoop YARN and MapReduce Planning your own cluster Installing and initially configuring a Hadoop cluster Installing and configuring Hive, Impala and Pig Hadoop clients Hadoop distributions and how to choose one Advanced cluster configuration Hadoop security Managing and scheduling jobs Cluster maintenance Cluster troubleshooting and monitoring Integrating Hadoop with data integration tools (e.g. SAS Data Integration Studio, Informatica PC, IBM DataStage, Oracle Data Integrator, SQL Server Integration Services, Ab Initio)
BigData_ A practical introduction to Data Analysis and Big Data 28 hours Participants who complete this training will gain a practical, real-world understanding of Big Data and its related technologies, methodologies and tools. Participants will have the opportunity to put this knowledge into practice through hands-on exercises. Group interaction and instructor feedback make up an important component of the class. The course starts with an introduction to elemental concepts of Big Data, then progresses into the programming languages and methodologies used to perform Data Analysis. Finally, we discuss the tools and infrastructure that enable Big Data storage, Distributed Processing, and Scalability. Audience Developers / programmers IT consultants Format of the course Part lecture, part discussion, heavy hands-on practice and implementation, occasional quizzing to measure progress. Introduction to Data Analysis and Big Data What makes Big Data "big"? Velocity, Volume, Variety, Veracity (VVVV) Limits to traditional Data Processing Distributed Processing Statistical Analysis Types of Machine Learning Analysis Data Visualization Languages used for Data Analysis R language (crash course) Why R for Data Analysis? Data manipulation, calculation and graphical display Python (crash course) Why Python for Data Analysis? Manipulating, processing, cleaning, and crunching data Approaches to Data Analysis Statistical Analysis Time Series analysis Forecasting with Correlation and Regression models Inferential Statistics (estimating) Descriptive Statistics in Big Data sets (e.g. calculating the mean) Machine Learning Supervised vs unsupervised learning Classification and clustering Estimating the cost of specific methods Filtering Natural Language Processing Processing text Understanding the meaning of the text Automatic text generation Sentiment/Topic Analysis Computer Vision Acquiring, processing, analyzing, and understanding images Reconstructing, interpreting and understanding 3D scenes Using image data to make decisions Big Data infrastructure Data Storage Relational databases (SQL) MySQL Postgres Oracle Non-relational databases (NoSQL) Cassandra MongoDB Neo4j Understanding the nuances Hierarchical databases Object-oriented databases Document-oriented databases Graph-oriented databases Other Distributed Processing Hadoop HDFS as a distributed filesystem MapReduce for distributed processing Spark All-in-one in-memory cluster computing framework for large-scale data processing Structured streaming Spark SQL Machine Learning libraries: MLlib Graph processing with GraphX Search Engines ElasticSearch Solr Scalability Public cloud AWS, Google, Aliyun, etc. Private cloud OpenStack, Cloud Foundry, etc. Auto-scalability Choosing the right solution for the problem The future of Big Data Closing remarks
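The descriptive-statistics part of the outline above (calculating a mean, and why it can mislead) fits in a few lines of Python's standard library. The latency figures below are invented sample data.

```python
import statistics

# Descriptive statistics on a small invented sample with one outlier
latencies_ms = [120, 130, 125, 400, 128]

mean = statistics.mean(latencies_ms)
median = statistics.median(latencies_ms)

print(mean)    # 180.6 -- dragged up by the 400 ms outlier
print(median)  # 128   -- robust to the outlier
```

The same contrast is why median and percentiles, not just means, are the usual summaries for skewed Big Data distributions such as response times.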
68737 Hadoop for Data Analysts 14 hours Hadoop Fundamentals The Motivation for Hadoop Hadoop Overview HDFS MapReduce The Hadoop Ecosystem Lab Scenario Explanation Hands-On Exercise: Data Ingest with Hadoop Tools Introduction to Pig What Is Pig? Pig’s Features Pig Use Cases Interacting with Pig Basic Data Analysis with Pig Pig Latin Syntax Loading Data Simple Data Types Field Definitions Data Output Viewing the Schema Filtering and Sorting Data Commonly-Used Functions Hands-On Exercise: Using Pig for ETL Processing Processing Complex Data with Pig Storage Formats Complex/Nested Data Types Grouping Built-in Functions for Complex Data Iterating Grouped Data Hands-On Exercise: Analyzing Ad Campaign Data with Pig Multi-Dataset Operations with Pig Techniques for Combining Data Sets Joining Data Sets in Pig Set Operations Splitting Data Sets Hands-On Exercise: Analyzing Disparate Data Sets with Pig Extending Pig Adding Flexibility with Parameters Macros and Imports UDFs Contributed Functions Using Other Languages to Process Data with Pig Hands-On Exercise: Extending Pig with Streaming and UDFs Pig Troubleshooting and Optimization Troubleshooting Pig Logging Using Hadoop’s Web UI Optional Demo: Troubleshooting a Failed Job with the Web UI Data Sampling and Debugging Performance Overview Understanding the Execution Plan Tips for Improving the Performance of Your Pig Jobs Introduction to Hive What Is Hive? Hive Schema and Data Storage Comparing Hive to Traditional Databases Hive vs. Pig Hive Use Cases Interacting with Hive Relational Data Analysis with Hive Hive Databases and Tables Basic HiveQL Syntax Data Types Joining Data Sets Common Built-in Functions Hands-On Exercise: Running Hive Queries on the Shell, Scripts, and Hue Hive Data Management Hive Data Formats Creating Databases and Hive-Managed Tables Loading Data into Hive Altering Databases and Tables Self-Managed Tables Simplifying Queries with Views Storing Query Results Controlling Access to Data Hands-On Exercise: Data Management with Hive Text Processing with Hive Overview of Text Processing Important String Functions Using Regular Expressions in Hive Sentiment Analysis and N-Grams Hands-On Exercise (Optional): Gaining Insight with Sentiment Analysis Hive Optimization Understanding Query Performance Controlling Job Execution Plan Partitioning Bucketing Indexing Data Extending Hive SerDes Data Transformation with Custom Scripts User-Defined Functions Parameterized Queries Hands-On Exercise: Data Transformation with Hive Introduction to Impala What is Impala? How Impala Differs from Hive and Pig How Impala Differs from Relational Databases Limitations and Future Directions Using the Impala Shell Analyzing Data with Impala Basic Syntax Data Types Filtering, Sorting, and Limiting Results Joining and Grouping Data Improving Impala Performance Hands-On Exercise: Interactive Analysis with Impala Choosing the Best Tool for the Job Comparing MapReduce, Pig, Hive, Impala, and Relational Databases Which to Choose?
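Pig Latin scripts, as covered in the analyst outline above, are dataflows built from steps like LOAD, FILTER, GROUP and FOREACH. The same pipeline shape can be sketched in plain Python over an invented ad-campaign dataset (this mirrors the semantics only, not Pig's syntax or execution):

```python
from itertools import groupby

# Invented sample data: (campaign, clicks) tuples
clicks = [("camp_a", 3), ("camp_b", 0), ("camp_a", 5), ("camp_b", 7)]

filtered = [row for row in clicks if row[1] > 0]      # like: FILTER BY clicks > 0
ordered = sorted(filtered, key=lambda row: row[0])    # groupby needs sorted input
totals = {camp: sum(n for _, n in rows)               # like: GROUP ... then SUM
          for camp, rows in groupby(ordered, key=lambda row: row[0])}

print(totals)  # {'camp_a': 8, 'camp_b': 7}
```

In Pig, each of these steps becomes a relation, and the engine compiles the whole dataflow into MapReduce jobs over the cluster.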
ApHadm1 Apache Hadoop: Manipulation and Transformation of Data Performance 21 hours This course is intended for developers, architects, data scientists or any profile that requires access to data either intensively or on a regular basis. The major focus of the course is data manipulation and transformation. Among the tools in the Hadoop ecosystem, this course includes the use of Pig and Hive, both of which are heavily used for data transformation and manipulation. This training also addresses performance metrics and performance optimisation. The course is entirely hands-on and is punctuated by presentations of the theoretical aspects. 1.1 Hadoop Concepts 1.1.1 HDFS The Design of HDFS Command line interface Hadoop File System 1.1.2 Clusters Anatomy of a cluster Master Node / Slave node Name Node / Data Node 1.2 Data Manipulation 1.2.1 MapReduce detailed Map phase Reduce phase Shuffle 1.2.2 Analytics with MapReduce Group-By with MapReduce Frequency distributions and sorting with MapReduce Plotting results (GNU Plot) Histograms with MapReduce Scatter plots with MapReduce Parsing complex datasets Counting with MapReduce and Combiners Build reports 1.2.3 Data Cleansing Document Cleaning Fuzzy string search Record linkage / data deduplication Transform and sort event dates Validate source reliability Trim Outliers 1.2.4 Extracting and Transforming Data Transforming logs Using Apache Pig to filter Using Apache Pig to sort Using Apache Pig to sessionize 1.2.5 Advanced Joins Joining data in the Mapper using MapReduce Joining data using Apache Pig replicated join Joining sorted data using Apache Pig merge join Joining skewed data using Apache Pig skewed join Using a map-side join in Apache Hive Using optimized full outer joins in Apache Hive Joining data using an external key-value store 1.3 Performance Diagnosis and Optimization Techniques Map Investigating spikes in input data Identifying map-side data skew problems Map task throughput Small files Unsplittable files Reduce Too few or too many reducers Reduce-side data skew problems Reduce task throughput Slow shuffle and sort Competing jobs and scheduler throttling Stack dumps & unoptimized code Hardware failures CPU contention Tasks Extracting and visualizing task execution times Profiling your map and reduce tasks Avoid the reducer Filter and project Using the combiner Fast sorting with comparators Collecting skewed data Reduce skew mitigation
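The map-side (replicated) join mentioned in the outline above avoids a shuffle by loading the small dataset into memory on every mapper and streaming the large one past it. A minimal sketch in plain Python, with invented datasets and function names:

```python
# Small side of the join, replicated to every mapper's memory
small_users = {1: "Alice", 2: "Bob"}

def map_join(order_stream, lookup):
    # Each "mapper" joins its share of the big dataset against the
    # in-memory copy of the small one; no reduce phase is needed.
    for user_id, amount in order_stream:
        if user_id in lookup:              # inner-join semantics
            yield lookup[user_id], amount

orders = [(1, 30), (2, 10), (3, 99), (1, 5)]   # big side, streamed
joined = list(map_join(orders, small_users))
print(joined)  # [('Alice', 30), ('Bob', 10), ('Alice', 5)]
```

This only works when one side fits in memory; the merge and skewed joins listed alongside it exist for the cases where neither side does.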
hadoopmapr Hadoop Administration on MapR 28 hours Audience: This course is intended to demystify Big Data/Hadoop technology and to show that it is not difficult to understand. Big Data Overview: What is Big Data Why Big Data is gaining popularity Big Data Case Studies Big Data Characteristics Solutions to work on Big Data Hadoop & Its Components: What is Hadoop and what are its components Hadoop architecture and the characteristics of data it can handle/process Brief on Hadoop history, companies using it and why they have started using it The Hadoop framework & its components, explained in detail What is HDFS and reads/writes to the Hadoop Distributed File System How to set up a Hadoop cluster in different modes: stand-alone/pseudo/multi-node cluster (this includes setting up a Hadoop cluster in VirtualBox/KVM/VMware, network configurations that need to be carefully looked into, running Hadoop daemons and testing the cluster) What is the MapReduce framework and how it works Running MapReduce jobs on a Hadoop cluster Understanding replication, mirroring and rack awareness in the context of Hadoop clusters Hadoop Cluster Planning: How to plan your Hadoop cluster Understanding hardware and software to plan your Hadoop cluster Understanding workloads and planning a cluster to avoid failures and perform optimally What is MapR and why MapR: Overview of MapR and its architecture Understanding and working with the MapR Control System, MapR volumes, snapshots & mirrors Planning a cluster in the context of MapR Comparison of MapR with other distributions and Apache Hadoop MapR installation and cluster deployment Cluster Setup & Administration: Managing services, nodes, snapshots, mirror volumes and remote clusters Understanding and managing nodes Understanding Hadoop components, installing Hadoop components alongside MapR services Accessing data on the cluster, including via NFS Managing services & nodes Managing data by using volumes, managing users and groups, managing & assigning roles to nodes, commissioning and decommissioning of nodes, cluster administration and performance monitoring, configuring/analyzing and monitoring metrics to monitor performance, configuring and administering MapR security Understanding and working with M7 - native storage for MapR tables Cluster configuration and tuning for optimum performance Cluster upgrade and integration with other setups: Upgrading the software version of MapR and types of upgrade Configuring a MapR cluster to access an HDFS cluster Setting up a MapR cluster on Amazon Elastic MapReduce All the above topics include demonstrations and practice sessions for learners to have hands-on experience of the technology.
hadoopforprojectmgrs Hadoop for Project Managers 14 godz. As more and more software and IT projects migrate from local processing and data management to distributed processing and big data storage, Project Managers are finding the need to upgrade their knowledge and skills to grasp the concepts and practices relevant to Big Data projects and opportunities. This course introduces Project Managers to the most popular Big Data processing framework: Hadoop.   In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can used to solve large-scale problems. In learning these foundations, participants will also improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve. Audience Project Managers wishing to implement Hadoop into their existing development or IT infrastructure Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts Format of the course Part lecture, part discussion, exercises and heavy hands-on practice Introduction     Why and how project teams adopt Hadoop.     How it all started     The Project Manager's role in Hadoop projects Understanding Hadoop's architecture and key concepts     HDFS     MapReduce     Other pieces of the Hadoop ecosystem What constitutes Big Data? 
Different approaches to storing Big Data HDFS (Hadoop Distributed File System) as the foundation How Big Data is processed     The power of distributed processing Processing data with MapReduce     How data is picked apart step by step The role of clustering in large-scale distributed processing     Architectural overview     Clustering approaches Clustering your data and processes with YARN The role of non-relational databases in Big Data storage Working with Hadoop's non-relational database: HBase Data warehousing architectural overview Managing your data warehouse with Hive Running Hadoop from shell-scripts Working with Hadoop Streaming Other Hadoop tools and utilities Getting started on a Hadoop project     Demystifying complexity Migrating an existing project to Hadoop     Infrastructure considerations     Scaling beyond your allocated resources Hadoop project stakeholders and their toolkits     Developers, data scientists, business analysts and project managers Hadoop as a foundation for new technologies and approaches Closing remarks
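The "picked apart step by step" flow in the outline above can be sketched in a few lines of plain Python. This is a minimal, single-process illustration of MapReduce semantics, not an actual Hadoop job; the function names and sample lines are made up for the example:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (key, value) pair for every word in the input line.
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort phase: group all values that share the same key.
    ordered = sorted(pairs, key=itemgetter(0))
    for key, group in groupby(ordered, key=itemgetter(0)):
        yield (key, [v for _, v in group])

def reducer(key, values):
    # Reduce phase: aggregate the grouped values (here: sum the counts).
    return (key, sum(values))

lines = ["big data is big", "hadoop processes big data"]
pairs = [p for line in lines for p in mapper(line)]
counts = dict(reducer(k, vs) for k, vs in shuffle(pairs))
print(counts["big"])  # 3
```

On a real cluster the mappers and reducers run on different nodes and the framework performs the shuffle/sort between them; the per-record semantics are the same.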
68780 Apache Spark 14 godz. Why Spark? Problems with Traditional Large-Scale Systems Introducing Spark Spark Basics What is Apache Spark? Using the Spark Shell Resilient Distributed Datasets (RDDs) Functional Programming with Spark Working with RDDs RDD Operations Key-Value Pair RDDs MapReduce and Pair RDD Operations The Hadoop Distributed File System Why HDFS? HDFS Architecture Using HDFS Running Spark on a Cluster Overview A Spark Standalone Cluster The Spark Standalone Web UI Parallel Programming with Spark RDD Partitions and HDFS Data Locality Working With Partitions Executing Parallel Operations Caching and Persistence RDD Lineage Caching Overview Distributed Persistence Writing Spark Applications Spark Applications vs. Spark Shell Creating the SparkContext Configuring Spark Properties Building and Running a Spark Application Logging Spark, Hadoop, and the Enterprise Data Center Overview Spark and the Hadoop Ecosystem Spark and MapReduce Spark Streaming Spark Streaming Overview Example: Streaming Word Count Other Streaming Operations Sliding Window Operations Developing Spark Streaming Applications Common Spark Algorithms Iterative Algorithms Graph Analysis Machine Learning Improving Spark Performance Shared Variables: Broadcast Variables Shared Variables: Accumulators Common Performance Issues
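The sliding-window operations listed in the Spark Streaming module above can be illustrated with a small stand-alone Python sketch. No Spark is required here; the function name, window size and slide interval are arbitrary example values chosen to show the mechanics:

```python
from collections import Counter, deque

def sliding_word_counts(batches, window_size, slide):
    # Each batch is the list of words that arrived in one micro-batch
    # interval. A window covers the last `window_size` batches and
    # advances `slide` batches at a time, mirroring the idea behind
    # Spark Streaming's windowed reductions.
    window = deque(maxlen=window_size)
    results = []
    for i, batch in enumerate(batches, start=1):
        window.append(batch)
        if i % slide == 0:
            counts = Counter(w for b in window for w in b)
            results.append(dict(counts))
    return results

batches = [["a", "b"], ["b"], ["c", "b"], ["a"]]
windows = sliding_word_counts(batches, window_size=3, slide=2)
print(windows)
```

In Spark Streaming the equivalent would be a windowed reduce over DStream micro-batches; here the windowing is simply a bounded deque.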
storm Apache Storm 28 godz. Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by enabling applications to reliably process unbounded streams of data (a.k.a. stream processing). "Storm is for real-time processing what Hadoop is for batch processing!" In this instructor-led live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real-time. Some of the topics included in this training include: Apache Storm in the context of Hadoop Working with unbounded data Continuous computation Real-time analytics Distributed RPC and ETL processing Request this course now! Audience Software and ETL developers Mainframe professionals Data scientists Big data analysts Hadoop professionals Format of the course     Part lecture, part discussion, exercises and heavy hands-on practice Request a customized course outline for this training!
68736 Hadoop for Developers 14 godz. Introduction What is Hadoop? What does it do? How does it do it? The Motivation for Hadoop Problems with Traditional Large-Scale Systems Introducing Hadoop Hadoopable Problems Hadoop: Basic Concepts and HDFS The Hadoop Project and Hadoop Components The Hadoop Distributed File System Introduction to MapReduce MapReduce Overview Example: WordCount Mappers Reducers Hadoop Clusters and the Hadoop Ecosystem Hadoop Cluster Overview Hadoop Jobs and Tasks Other Hadoop Ecosystem Components Writing a MapReduce Program in Java Basic MapReduce API Concepts Writing MapReduce Drivers, Mappers, and Reducers in Java Speeding Up Hadoop Development by Using Eclipse Differences Between the Old and New MapReduce APIs Writing a MapReduce Program Using Streaming Writing Mappers and Reducers with the Streaming API Unit Testing MapReduce Programs Unit Testing The JUnit and MRUnit Testing Frameworks Writing Unit Tests with MRUnit Running Unit Tests Delving Deeper into the Hadoop API Using the ToolRunner Class Setting Up and Tearing Down Mappers and Reducers Decreasing the Amount of Intermediate Data with Combiners Accessing HDFS Programmatically Using the Distributed Cache Using the Hadoop API’s Library of Mappers, Reducers, and Partitioners Practical Development Tips and Techniques Strategies for Debugging MapReduce Code Testing MapReduce Code Locally by Using LocalJobRunner Writing and Viewing Log Files Retrieving Job Information with Counters Reusing Objects Creating Map-Only MapReduce Jobs Partitioners and Reducers How Partitioners and Reducers Work Together Determining the Optimal Number of Reducers for a Job Writing Custom Partitioners Data Input and Output Creating Custom Writable and Writable-Comparable Implementations Saving Binary Data Using SequenceFile and Avro Data Files Issues to Consider When Using File Compression Implementing Custom InputFormats and OutputFormats Common MapReduce Algorithms Sorting and Searching Large Data Sets 
Indexing Data Computing Term Frequency-Inverse Document Frequency Calculating Word Co-Occurrence Performing Secondary Sort Joining Data Sets in MapReduce Jobs Writing a Map-Side Join Writing a Reduce-Side Join Integrating Hadoop into the Enterprise Workflow Integrating Hadoop into an Existing Enterprise Loading Data from an RDBMS into HDFS by Using Sqoop Managing Real-Time Data Using Flume Accessing HDFS from Legacy Systems with FuseDFS and HttpFS An Introduction to Hive, Impala, and Pig The Motivation for Hive, Impala, and Pig Hive Overview Impala Overview Pig Overview Choosing Between Hive, Impala, and Pig An Introduction to Oozie Introduction to Oozie Creating Oozie Workflows
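The "Computing Term Frequency-Inverse Document Frequency" topic above boils down to a short formula. A minimal Python sketch of one common TF-IDF variant (weighting schemes vary between implementations, and the corpus here is made up):

```python
import math

def tf_idf(term, doc, corpus):
    # Term frequency: the share of this document's words that are the term.
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: terms found in few documents score higher.
    docs_with_term = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / docs_with_term)
    return tf * idf

corpus = [["big", "data"], ["big", "cluster"], ["hadoop", "cluster"]]
score = tf_idf("data", corpus[0], corpus)  # "data" occurs in one document only
```

In the MapReduce formulation this is typically computed in stages: one job counts term frequencies per document, a second counts document frequencies per term, and a final pass combines the two.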
kylin Apache Kylin: From classic OLAP to real-time data warehouse 14 godz. Apache Kylin is an extreme, distributed analytics engine for big data. In this instructor-led live training, participants will learn how to use Apache Kylin to set up a real-time data warehouse. By the end of this training, participants will be able to: Consume real-time streaming data using Kylin Utilize Apache Kylin's powerful features, including snowflake schema support, a rich SQL interface, Spark cubing and sub-second query latency Note We use the latest version of Kylin (as of this writing, Apache Kylin v2.0) Audience Big data engineers Big Data analysts Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
hadoopdeva Advanced Hadoop for Developers 21 godz. Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase.  These advanced programming techniques will be beneficial to experienced Hadoop developers. Audience: developers Duration: three days Format: lectures (50%) and hands-on labs (50%).   Section 1: Data Management in HDFS Various Data Formats (JSON / Avro / Parquet) Compression Schemes Data Masking Labs : Analyzing different data formats;  enabling compression Section 2: Advanced Pig User-defined Functions Introduction to Pig Libraries (ElephantBird / Data-Fu) Loading Complex Structured Data using Pig Pig Tuning Labs : advanced pig scripting, parsing complex data types Section 3 : Advanced Hive User-defined Functions Compressed Tables Hive Performance Tuning Labs : creating compressed tables, evaluating table formats and configuration Section 4 : Advanced HBase Advanced Schema Modelling Compression Bulk Data Ingest Wide-table / Tall-table comparison HBase and Pig HBase and Hive HBase Performance Tuning Labs : tuning HBase; accessing HBase data from Pig & Hive; Using Phoenix for data modeling
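The data formats and compression schemes in Section 1 above can be previewed locally with the standard library alone. A hedged sketch, with entirely made-up sample records, comparing the size of newline-delimited JSON before and after gzip compression (no Hadoop involved):

```python
import gzip
import json

# A few fabricated log-style records in the row-oriented JSON text format.
records = [{"id": i, "event": "click", "host": "node-%02d" % (i % 4)}
           for i in range(1000)]
raw = "\n".join(json.dumps(r) for r in records).encode("utf-8")

# Apply a gzip compression scheme, as is often done for files landed in HDFS.
compressed = gzip.compress(raw)

ratio = len(raw) / len(compressed)
print(f"raw={len(raw)}B compressed={len(compressed)}B ratio={ratio:.1f}x")
```

Repetitive text formats like JSON compress very well, which is one reason the course contrasts them with binary columnar formats such as Avro and Parquet, where the schema is stored once instead of in every record.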
ambari Apache Ambari: Efficiently manage Hadoop clusters 21 godz. Apache Ambari is an open-source management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters. In this instructor-led live training, participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters. By the end of this training, participants will be able to: Set up a live Big Data cluster using Ambari Apply Ambari's advanced features and functionalities to various use cases Seamlessly add and remove nodes as needed Improve a Hadoop cluster's performance through tuning and tweaking Audience DevOps System Administrators DBAs Hadoop testing professionals Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
hadoopdev Hadoop for Developers (4 days) 28 godz. Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce a developer to the various components (HDFS, MapReduce, Pig, Hive and HBase) of the Hadoop ecosystem.  Section 1: Introduction to Hadoop hadoop history, concepts ecosystem distributions high level architecture hadoop myths hadoop challenges hardware / software lab : first look at Hadoop Section 2: HDFS Design and architecture concepts (horizontal scaling, replication, data locality, rack awareness) Daemons : Namenode, Secondary namenode, Data node communications / heartbeats data integrity read / write path Namenode High Availability (HA), Federation labs : Interacting with HDFS Section 3 : Map Reduce concepts and architecture daemons (MRV1) : jobtracker / tasktracker phases : driver, mapper, shuffle/sort, reducer Map Reduce Version 1 and Version 2 (YARN) Internals of Map Reduce Introduction to Java Map Reduce program labs : Running a sample MapReduce program Section 4 : Pig pig vs java map reduce pig job flow pig latin language ETL with Pig Transformations & Joins User defined functions (UDF) labs : writing Pig scripts to analyze data Section 5: Hive architecture and design data types SQL support in Hive Creating Hive tables and querying partitions joins text processing labs : various labs on processing data with Hive Section 6: HBase concepts and architecture hbase vs RDBMS vs cassandra HBase Java API Time series data on HBase schema design labs : Interacting with HBase using shell; programming in HBase Java API; Schema design exercise
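Section 2's replication and rack-awareness concepts follow HDFS's well-known default placement policy: the first replica goes on the writer's node, the second on a node in a different rack, and the third on another node in that same remote rack. A simplified sketch with made-up node and rack names (it assumes a replication factor of 3 and at least two nodes per rack):

```python
import random

def place_replicas(writer, topology, seed=0):
    # topology maps rack name -> list of node names.
    rng = random.Random(seed)
    writer_rack = next(r for r, nodes in topology.items() if writer in nodes)
    # 1st replica: the node writing the block (data locality).
    first = writer
    # 2nd replica: a node on a different rack, so a rack failure is survivable.
    other_rack = rng.choice([r for r in topology if r != writer_rack])
    second = rng.choice(topology[other_rack])
    # 3rd replica: a different node on that same remote rack (cheaper than
    # crossing racks a second time).
    third = rng.choice([n for n in topology[other_rack] if n != second])
    return [first, second, third]

topology = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"]}
replicas = place_replicas("n1", topology)
```

The real NameNode also weighs node load and available space when choosing among candidates; this sketch only captures the rack-level rule.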
hadoopba Hadoop for Business Analysts 21 godz. Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capability, and it is making inroads into the traditional BI analytics world. This course will introduce an analyst to the core components of the Hadoop ecosystem and its analytics. Audience Business Analysts Duration three days Format Lectures and hands-on labs. Section 1: Introduction to Hadoop hadoop history, concepts ecosystem distributions high level architecture hadoop myths hadoop challenges hardware / software Labs : first look at Hadoop Section 2: HDFS Overview concepts (horizontal scaling, replication, data locality, rack awareness) architecture (Namenode, Secondary namenode, Data node) data integrity future of HDFS : Namenode HA, Federation labs : Interacting with HDFS Section 3 : Map Reduce Overview mapreduce concepts daemons : jobtracker / tasktracker phases : driver, mapper, shuffle/sort, reducer Thinking in map reduce Future of mapreduce (yarn) labs : Running a Map Reduce program Section 4 : Pig pig vs java map reduce pig latin language user defined functions understanding pig job flow basic data analysis with Pig complex data analysis with Pig multi datasets with Pig advanced concepts lab : writing pig scripts to analyze / transform data Section 5: Hive hive concepts architecture SQL support in Hive data types table creation and queries Hive data management partitions & joins text analytics labs (multiple) : creating Hive tables and running queries, joins, using partitions, using text analytics functions Section 6: BI Tools for Hadoop BI tools and Hadoop Overview of current BI tools landscape Choosing the best tool for the job
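The "partitions & joins" topics in Section 5 rest on plain relational semantics. A sketch of an inner join between two small in-memory tables, with made-up data (Hive would express the same thing in SQL over files in HDFS):

```python
# Fact table: (order_id, customer, amount) -- fabricated example rows.
orders = [("o1", "alice", 120), ("o2", "bob", 80), ("o3", "alice", 45)]
# Small dimension table: customer -> city.
customers = {"alice": "Krakow", "bob": "Warszawa"}

# Hash join: build a lookup on the small table, probe with the large one.
# This mirrors what a Hive map-side join does with a small dimension table,
# broadcasting it to every mapper instead of shuffling both inputs.
joined = [(order_id, name, amount, customers[name])
          for order_id, name, amount in orders
          if name in customers]
```

Partitioning complements this: a table partitioned on the join or filter key lets the engine read only the matching partitions rather than scanning everything.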

Upcoming courses

Course | Course date | Price [Remote / On-site]
Hadoop for Data Analysts - Kielce, ul. Warszawska 19 | Tue, 2017-10-10 09:00 | 6000 PLN / 3400 PLN
Hadoop for Developers (2 days) - Szczecin, ul. Sienna 9 | Tue, 2017-10-10 09:00 | 18260 PLN / 6033 PLN
Big Data Hadoop Analyst Training - Częstochowa, ul. Wały Dwernickiego 117/121 | Mon, 2017-10-16 09:00 | 15000 PLN / 5945 PLN
Hadoop for Developers (4 days) - Warszawa, ul. Złota 3/11 | Mon, 2017-10-16 09:00 | 34010 PLN / 11106 PLN




