...retrieval and manipulation tools for various data sources such as: REST/SOAP APIs, relational (MySQL) and NoSQL databases (MongoDB), IoT data streams, cloud-based storage, and HDFS. Strong foundation in algorithms and data science theory. Strong verbal and written communication skills with other developers and business clients. Knowledge of Telecom and/or...
This project will have two sets of APIs: a) a set of APIs to create Hive, HBase, and Impala structures based on inputs provided by users; b) APIs to read Hive, HBase, and Impala tables and generate the schema for the selected tables.
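A minimal sketch of what the structure-creation side (a) could produce, assuming the API accepts a database name, a table name, and a list of column specs. The function name, the sample schema, and the Parquet default are placeholders, not something the posting specifies:

```python
def hive_create_table_ddl(db, table, columns, stored_as="PARQUET"):
    """Build a Hive CREATE TABLE statement from (name, type) pairs.
    `db`, `table`, and `columns` would come from the API request body."""
    cols = ",\n  ".join(f"`{name}` {ctype}" for name, ctype in columns)
    return (
        f"CREATE TABLE IF NOT EXISTS `{db}`.`{table}` (\n"
        f"  {cols}\n"
        f") STORED AS {stored_as}"
    )

# illustrative call with made-up schema
ddl = hive_create_table_ddl("sales", "orders",
                            [("order_id", "BIGINT"), ("amount", "DOUBLE")])
```

The schema-reading side (b) would do the reverse: run `DESCRIBE db.table` and map the returned rows back into the same column-spec structure.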
Hi, I have 5 apps in the Android Developer Console and I want to pull all the statistics for them into a single spreadsheet...data is available on a month-by-month basis from 2012 to 2018. In total you will review about 2000 spreadsheets. You can do this manually, or programmatically using the gsutil tool. I don't mind either way. Happy bidding
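Once the monthly CSV exports have been downloaded (e.g. with `gsutil -m cp` from the console's reporting bucket, whose exact path is account-specific and not given here), the merge step can be sketched like this; `merge_monthly_reports` and the column names are illustrative assumptions:

```python
import csv

def merge_monthly_reports(paths, out_path):
    """Concatenate monthly stats CSV files into one spreadsheet,
    keeping a single header row taken from the first non-empty file.
    Assumes all files share the same column layout."""
    header_written = False
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in sorted(paths):
            with open(path, newline="") as f:
                rows = list(csv.reader(f))
            if not rows:
                continue
            if not header_written:
                writer.writerow(rows[0])   # header from the first file only
                header_written = True
            writer.writerows(rows[1:])     # data rows from every file
```

With ~2000 files this runs in seconds; the main thing to watch is that every export really does share one header layout across the 2012-2018 range.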
...Hands-on experience with Big Data tools and frameworks. Experience with the Hadoop ecosystem, integrating and implementing solutions using technologies like Hive, Pig, MapReduce, HDFS, etc. 2. Should have a proficient understanding of the distributed computing paradigm and of real-time vs. batch processing. 3. Should be proficient in designing
Urgent requirement - We have an urgent requirement for Data Engineering (Big Data, HDFS, Hive, SQL, Java/Python, Spark). Positions: 1. Location: Gurugram/Bangalore. Duration: 6 months to 1 year. Years of experience: 4-5. Skills required: Big Data, HDFS, Hive, SQL, Java/Python, Spark. Note: please don't reach out to us for remote [login to view URL] it is mentioned it's
I’m looking for a Spark/Hadoop consultant to establish connections from a Spark server to retrieve data from HDFS with Kerberos authentication. Please let me know if you are available to chat over the phone today. I need freelance consulting for a few hours, depending on the scope and time you would suggest.
...Cloudera solution for user authentication and integration with the Sentry permission system; PCI-DSS implementation on the Cloudera platform (Kerberos, SSL, HDFS encryption, and the Hadoop ecosystem components); implementation of auditing on the Cloudera platform. Education: in IT (Si...
I need you to develop some software for me, for Linux. I have one small task. The goal of the task: I need code for listing all corrupted gzip file names in an HDFS directory, like `gunzip -t [login to view URL]` locally. Budget 30 dollars. I will highly appreciate it if you write it in Scala.
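The posting asks for Scala against HDFS; the check itself (a full decompression pass, which is what `gunzip -t` does) can be sketched in Python against a local directory, with the understanding that on HDFS the bytes would come from `hdfs dfs -cat` or the Hadoop FileSystem API instead of `open()`. The function name is an assumption:

```python
import gzip
import os
import zlib

def corrupted_gzip_files(directory):
    """Return the names of .gz files in `directory` that fail a full
    decompression pass -- the same check `gunzip -t` performs."""
    bad = []
    for name in sorted(os.listdir(directory)):
        if not name.endswith(".gz"):
            continue
        path = os.path.join(directory, name)
        try:
            with gzip.open(path, "rb") as f:
                # read to EOF in 1 MiB chunks; corruption raises here
                while f.read(1 << 20):
                    pass
        except (OSError, EOFError, zlib.error):
            bad.append(name)
    return bad
```

Truncated files raise `EOFError`, bad headers and CRC mismatches raise `BadGzipFile` (an `OSError` subclass), and mid-stream corruption can surface as `zlib.error`, so all three are caught.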
Hadoop notes for my [login to view URL] should require [login to view URL] 2. HDFS [login to view URL] Reduce [login to view URL] [login to view URL] Important note: require the notes in a detailed manner, including the commands used, for all the things stated above.
...Bring the course content/PPT/exercises 3. The course must cover all (but not limited to) the following topics: 3.1 Introduction to Big Data & Hadoop 3.2 Hadoop Architecture & HDFS 3.3 Hadoop MapReduce Framework 3.4 Advanced Hadoop MapReduce Framework 3.5 Apache Pig 3.6 Apache Hive 3.7 HBase 3.8 Advanced topics of 3.5, 3.6, 3.7 3.9 Distributed data with
I have a Spark application which pulls information from an HDFS file system and inserts data into HBase, or vice versa. I need a Docker environment where I can test my Spark application. The Docker environment can be either a single standalone node with Java, Python, Hadoop, Spark, and HBase running in it, or a cluster running Spark and HBase on different
...with middleware technologies including Docker, Mesos/DCOS, Kubernetes, Marathon, Spark and Cloud services. • Experience with Big Data Analytics, Hadoop, Kafka, Flume, Yarn, HDFS, Spark, Hive • Development experience in REST API development, Git/Github, Test Driven Development • Desire and skills to explore and master new open source tools and technologies
...and develop meaningful relationships to achieve common goals. 2+ years’ experience designing and developing in Python and Spark. 2+ years’ experience with the Hadoop platform (Hive, HDFS, Impala). 3+ years’ experience with Unix shell scripting, SQL, and SAS. 2+ years’ experience with Agile methodology. When you work at JPMorgan Chase & Co., you’re not just working
...experience in writing HDFS & Pig Latin commands. - Develop complex queries using Hive. - Work on new developments on Hadoop using Hive, HBase, Impala, Flume, MapReduce, HDFS, Oozie, Kafka, Sqoop, Java, and shell scripts. - Develop data pipelines using Flume, Sqoop, Pig, and Java MapReduce to ingest claim data and financial histories into HDFS for analysis
Hi, I need to take data from a DB and display records on [login to view URL]. The data is very large, so I need to implement this using big data tools. I want to use Hive, Impala, Spark, HDFS, and MapReduce to achieve this. The records can be drilled down further to show more results on screen. For example: Hyundai 1232 5767 vrerere 12132 Elantra Accent
Need a Python script to connect to Hive. 1) Need different implementations using PyHive, pyhs2, ThriftHive, and pandas. 2) Hive is on HDFS and the HDFS servers are Kerberos-enabled (SSL/SASL); should use principals/keystores to connect to Hive. 3) Can use the information below: [login to view URL]
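For the PyHive variant, the Kerberos piece is mostly connection arguments: PyHive's `hive.Connection` accepts `auth='KERBEROS'` plus a `kerberos_service_name`, and picks up the ticket the client already holds (from `kinit` with the principal's keytab). A sketch, with the helper name and hostname as assumptions:

```python
def hive_connection_kwargs(host, port=10000, service="hive"):
    """Arguments for pyhive.hive.Connection when HiveServer2 is
    Kerberos/SASL-secured. The caller must already hold a valid
    Kerberos ticket; PyHive authenticates via SASL/GSSAPI."""
    return {
        "host": host,
        "port": port,                        # HiveServer2 default port
        "auth": "KERBEROS",
        "kerberos_service_name": service,    # service part of hive/_HOST@REALM
    }

# usage sketch (needs a live, kerberized HiveServer2):
# from pyhive import hive
# conn = hive.Connection(**hive_connection_kwargs("hive-gw.example.com"))
# import pandas as pd
# df = pd.read_sql("SELECT * FROM some_db.some_table LIMIT 10", conn)
```

The pandas implementation then reduces to `pd.read_sql` over the same connection; pyhs2 and ThriftHive take analogous SASL settings but through their own APIs.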
...creative, energetic developers. Desired experience: MEAN stack, Angular 5, REST, [login to view URL], TensorFlow, Mongo, graph expertise (Neo4j preferably, but any is fine), SOLR, HDFS. Super amazing pluses - if they've ever worked with TensorFlow, Sphinx,...
I have an application in which the user selects a folder from HDFS, and the application writes the results to hdfs/output/directory. We need Java code that checks the permissions of the output directory before writing the results to hdfs/output/directory.
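The shape of that pre-flight check can be sketched in Python against a local path; the Java/HDFS counterpart would use the Hadoop `FileSystem` API's permission check against the same directory. The function name and the rule for a not-yet-created output directory are assumptions:

```python
import os

def can_write_output_dir(path):
    """Pre-flight check before a job writes its results: if the output
    directory exists it must be writable (and traversable); if it does
    not exist yet, its parent must be, so the job can create it.
    Local stand-in for the HDFS-side permission check."""
    if os.path.isdir(path):
        return os.access(path, os.W_OK | os.X_OK)
    parent = os.path.dirname(path.rstrip("/")) or "."
    return os.path.isdir(parent) and os.access(parent, os.W_OK | os.X_OK)
```

Running the check up front lets the application fail with a clear message instead of dying mid-write when the job tries to create its output files.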
Have to crawl the data and store it in HDFS using Apache Nutch integrated with Hadoop.
Find any dataset (Twitter, e-commerce, e-Health, ...), extract and store the data in Hadoop, process the data in Hadoop, restructure and filter it, and do sentiment analysis. Use Hadoop tools: HDFS, MapReduce, or any other tool.
This is a pure text-based search engine kind of application. Basically this applicat...index file is searched for that word and produces the output in ranking order. We have completed the whole project using the local file system and we want to implement it on HDFS. [login to view URL]
This is a pure text-based search engine kind of application. Basically this application also accepts PDFs and converts them into text files and generates t...index file is searched for that word and produces the output in ranking order. We have completed the whole project using the local file system and we want to implement it on the HDFS file system.
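The core structure behind both postings, an inverted index queried in ranking order, can be sketched independently of the storage layer; moving from the local file system to HDFS changes where the index file is read and written, not this logic. Function names and the term-frequency ranking are illustrative assumptions:

```python
import re
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns {term: {doc_id: count}} -- the
    structure the index file holds, wherever it is stored."""
    index = defaultdict(lambda: defaultdict(int))
    for doc_id, text in docs.items():
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            index[term][doc_id] += 1
    return index

def search(index, term):
    """Doc ids containing `term`, highest term count first -- a simple
    stand-in for whatever ranking the real application uses."""
    hits = index.get(term.lower(), {})
    return sorted(hits, key=hits.get, reverse=True)
```

For the PDF-accepting variant, a text-extraction step feeds `docs` before indexing; the index and search stages are unchanged.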
Create a simple Java Oozie application that reads from HDFS and writes to Cassandra. It simply reads a file from HDFS and writes to a Cassandra table. It doesn't matter which data. Once you write this sample application, you will guide me through running it.
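The transform step in the middle, file lines in, Cassandra statements out, can be sketched separately from the Oozie wiring. This Python sketch builds the `(statement, params)` pairs a driver's `session.execute()` would take; the keyspace, table, and two-column layout are placeholders, since the posting says any data will do:

```python
def rows_to_cql(lines, keyspace="demo", table="events"):
    """Turn lines of a comma-separated file (as read from HDFS) into
    (statement, params) pairs for a Cassandra driver's execute() call.
    First field is the key, the rest is the payload."""
    stmt = f"INSERT INTO {keyspace}.{table} (id, payload) VALUES (%s, %s)"
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue                       # skip blank lines
        key, _, payload = line.partition(",")
        out.append((stmt, (key, payload)))
    return out
```

In the actual Java application the same mapping would sit between the HDFS `FSDataInputStream` read and the Cassandra writes, with Oozie's workflow XML invoking the main class.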
...Technologies are primarily Spark, Spark Streaming, SQL, and Kafka. Our project mainly deals with real-time data processing using Kafka with Spark. Currently we are using Vertica and HDFS for data storage and are migrating to AWS S3, so at least 1 year of AWS experience is required. All coding is in Scala, so Scala is the main language. Knowledge of Akka actors, Akka
I need you to develop some software for me.
...Basically the task is to access data from HDFS in .packet form, query the data for relevant UIDs, fetch some specific fields for those UIDs, compute parameters by performing mathematical operations on those fields, and store the processed values in a separate .packet file on HDFS. Further aggregation needs to be performed
Write an ETL process using Java, Spark & HDFS. Copy the input file to HDFS. Read the input file from HDFS using Java & Spark. Perform the function below on the dataset: Average_Calculation(): for each stock, calculate the average trading volume and the average trading price for each month. So for each stock, for each month, calculate
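The aggregation itself is a group-by over (stock, month); roughly what Spark's `groupBy` with two `avg` aggregations would compute after the HDFS read. A plain-Python sketch of that logic, with the row shape and function name as assumptions:

```python
from collections import defaultdict

def monthly_averages(rows):
    """rows: (stock, date 'YYYY-MM-DD', volume, price) tuples.
    Returns {(stock, 'YYYY-MM'): (avg_volume, avg_price)}."""
    acc = defaultdict(lambda: [0, 0.0, 0.0])  # count, sum_volume, sum_price
    for stock, date, volume, price in rows:
        key = (stock, date[:7])               # month key: 'YYYY-MM'
        a = acc[key]
        a[0] += 1
        a[1] += volume
        a[2] += price
    return {k: (v[1] / v[0], v[2] / v[0]) for k, v in acc.items()}
```

In the Spark version this becomes a single `groupBy("stock", "month").agg(...)` over the DataFrame loaded from HDFS, with the month derived from the date column.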
This is a simple POC to show how data standardisation/quality checks can be performed using Hive. We have one file (mostly fixed-width) with ~100 fields available in HDFS. We need to read the file and apply rules to standardise the data on ~10 fields. Please refer to the attached doc for more details.
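The parse-and-standardise step can be sketched in Python; in Hive the same thing would be `substr()`/`trim()`/`upper()` expressions in a view over the raw file. The three-column layout and the two rules below are purely illustrative, since the real layout and rules are in the attached doc:

```python
# (name, start, end) offsets are illustrative -- the real ~100-field
# layout comes from the attached spec.
LAYOUT = [("cust_id", 0, 8), ("name", 8, 28), ("country", 28, 30)]

def standardise(line):
    """Slice one fixed-width record and apply per-field rules."""
    rec = {name: line[a:b].strip() for name, a, b in LAYOUT}
    rec["name"] = rec["name"].title()        # rule: canonical capitalisation
    rec["country"] = rec["country"].upper()  # rule: upper-case country codes
    return rec
```

Scaling this to ~10 standardised fields out of ~100 is just more entries in the layout and more per-field rules; everything else stays the same.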
Looking for an instructor with big data knowledge. Please don't bid for the project unless you can work on the following. Serious inquiries only, and nothing is negotiable. 1. You must be able to teach in CST time. 2. Must be committed for a long time. 3. Prices are negotiable after a few months of work. 5. Must know the following: Apache Spark, Map/Reduce, Java libraries. 6. All you have to do...
Need someone to work on an HDFS project. Java code will be provided. You will only need to make a mapper (in Python) and a reducer (in Python) for processing data. All mappers and reducers must work with the Java code provided. Must understand HDFS, Linux, Java, and Python. Must be a very good programmer and love Big Data. Bigger projects will
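A Python mapper/reducer pair that interoperates with a Java job usually means Hadoop Streaming: the mapper and reducer read lines on stdin and emit tab-separated key/value lines on stdout. A minimal word-count sketch of that shape, with the logic factored into testable functions (the actual processing the Java code expects is not specified in the posting):

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit 'word\t1' per token. Under Hadoop Streaming
    these lines go to stdout and into the shuffle."""
    for line in lines:
        for word in line.split():
            yield f"{word.lower()}\t1"

def reducer(lines):
    """Reduce phase: input arrives sorted by key (the shuffle
    guarantees this), so consecutive equal keys can be summed."""
    parsed = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(c) for _, c in group)}"

# To run under Hadoop Streaming, a small __main__ wrapper would pipe
# sys.stdin through mapper() or reducer() and write to sys.stdout,
# selected by a command-line argument.
```

Locally the pipeline can be simulated as mapper output, sorted, fed to the reducer, which mirrors what the framework does between the two phases.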
Please find details about the training/consulting requirement. Kindly find the contents: read Kafka data and put it into HDFS using Scala and Spark Streaming; read MySQL data and put it into HDFS using Spark Streaming and Scala; Hadoop production resource allocation; Druid; Oozie scheduler; and the Java API/framework integration with Hadoop.
We use nginx and nginx-vod-module ([login to view URL]) for our video streaming service. We use Hadoop HDFS for storage (HDFS -> Jetty web server -> nginx (nginx-vod-module remote mode) -> user). HLS video streaming works fine, but there are performance and network leak issues. We want to fix this via nginx-vod-module and nginx.
Experience between 6 and 10 years. JOB DESCRIPTION IN BRIEF: Develop on a Big Data architecture, Hadoop stack including HDFS cluster, MapReduce, Pig, Hive, Spark, and YARN resource management. Hands-on programming experience in any programming language like Python/Scala/R/Java. Assist and support proofs of concept as Big Data technology evolves.
I'm looking for a tutor or a Hadoop admin who can teach me the basics of Hadoop (HDFS, MapReduce, Hive, Hue, YARN, Spark, Kafka, Cassandra, Mongo, Linux, DBA, Java, networking, Active Directory, TLS, encryption). I don't need very deep insights; I just need an outline and someone who can patiently answer all my questions.
This project is to access Hadoop services (HDFS, Hive, HBase, YARN, and Impala) from an external Java program (the program runs outside the Hadoop cluster) and automate tasks, then integrate this project with other applications.