Snowflake PySpark Jobs
Programming: PySpark & JavaScript. The user should be able to input Python source code, and the app will then generate documentation for that code (documentation of its functions and classes) and let the user save it; the user should also be able to see the dependencies between the classes and source-code metrics. In this project, you need to create an app that lets the user (client) upload a Python source file, and the app will generate documentation of the uploaded code (such as a list of functions and a class diagram). The output must include: all the class names and what is inside each class; a class diagram showing the relationships/dependencies between the classes; and all the functions in the code, with an explanation of each function. A sketch of the parsing step appears below.
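A minimal sketch of how the documentation step could work, assuming the uploaded file is plain Python source: the standard-library ast module can list each class, its base classes (a rough dependency hint for the class diagram), its methods, and the module-level functions. The file name below is a hypothetical placeholder, and ast.unparse needs Python 3.9+.

```python
import ast

def summarize_source(path):
    """Parse a Python file and list its classes, methods, and functions."""
    with open(path, "r", encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)

    summary = {"classes": {}, "functions": []}
    for node in tree.body:
        if isinstance(node, ast.ClassDef):
            # Record the class, its base classes (dependency hint), and methods.
            bases = [ast.unparse(b) for b in node.bases]
            methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
            summary["classes"][node.name] = {"bases": bases,
                                             "methods": methods,
                                             "doc": ast.get_docstring(node)}
        elif isinstance(node, ast.FunctionDef):
            summary["functions"].append({"name": node.name,
                                         "doc": ast.get_docstring(node)})
    return summary

if __name__ == "__main__":
    # "uploaded_module.py" is a hypothetical uploaded file.
    print(summarize_source("uploaded_module.py"))
```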
Ontology Based Program for Python Programming Environment
Job Title: Freelance PowerPoint Presentation Designer - Snowflake Tutorial Job Description: We are seeking a talented and experienced freelance PowerPoint presentation designer who can create a visually engaging and animated PowerPoint presentation for our Snowflake tutorial. Snowflake is a complex platform, and we want to transform our instructional content into a dynamic and educational presentation. Key Responsibilities: Content Adaptation: Take our existing tutorial content and transform it into a compelling PowerPoint presentation. Design and Animation: Create visually stunning slides with animations and transitions that enhance the learning experience. Custom Graphics: Incorporate custom graphics, icons, and illustrations to illustrate key concepts. Consi...
I am looking for a freelancer who can convert my pandas code to PySpark. The dataset is small, less than 1 GB in size. I don't have specific transformations or operations in mind, but I am open to suggestions. It is important that the PySpark code is optimized for performance. Ideal skills and experience: - Strong knowledge and experience in both pandas and PySpark - Ability to understand and convert pandas code to PySpark - Familiarity with optimizing PySpark code for performance. The output should match the existing Python/pandas output, delivered alongside the PySpark code. Please add print statements to verify. Versions: Spark - 2.4.7.7, Anaconda3-2018.
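As a hedged illustration of the requested conversion (the columns and the aggregation are invented, not from the posting), here is the same group-by written in pandas and in PySpark, each followed by a print to verify the outputs match:

```python
import pandas as pd
from pyspark.sql import SparkSession, functions as F

# pandas version
pdf = pd.DataFrame({"city": ["NY", "NY", "LA"], "sales": [10, 20, 5]})
pandas_result = pdf.groupby("city", as_index=False)["sales"].sum()
print(pandas_result)  # verify the pandas output

# PySpark version of the same aggregation
spark = SparkSession.builder.appName("pandas_to_pyspark").getOrCreate()
sdf = spark.createDataFrame(pdf)
spark_result = sdf.groupBy("city").agg(F.sum("sales").alias("sales"))
spark_result.show()  # verify the PySpark output matches the pandas result
```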
...reasons: it could be more creative and having it as all one color will cost less for our print/packing material. What we need: 1) Must be all Deep Navy Blue (PMS 289 or RGB 12, 35, 64) or, if adding shadow, lighting, and other effects, must look nice when converted to that single Dark Navy Blue color for printing purposes. 2) Must incorporate our company name | Snowflake Designs | AND our specific Snowflake, as we use the Snowflake as our brandmark in crystals on all our clothing. 3) Will more than likely need to fit in a square area (our product tags are die-cut, rounded snowflakes) though a rectangular (wider than tall) logo could work for our website/package/print, provided we could scale the rectangular logo elements to fit in the square tags later. What we...
I'll do your project as quickly as possible. Thanks for selecting me.
Need Ontology Based Program for Python Programming Environment
...stylized illustration of two leaves – one representing a vibrant green leaf and the other resembling a delicate snowflake. These elements symbolize the dual nature of your services: lawn care and snow removal. Around the upper half of the circle, the text "TwinLeaf Lawns & Flakes" is elegantly written in a modern, clean font. The "TwinLeaf" portion can curve gracefully above the illustration, while "Lawns & Flakes" is situated below. Beneath the logo icon, include the tagline "Transforming Scapes, Whatever It Takes!" in a slightly smaller font to convey the dedication and commitment of your services. For the color palette, consider using shades of green and blue for the leaves and snowflake, respectively. These color...
"· Set up the security settings (allow read only for a few tables for web application) · Create a around 10 scheduled functions/stored procedures · Finish setting up files imports" Experience - (3-5)
...Avoid bright, neon, or clashing colors. Avoid overly cartoonish elements. 6. Deliverables: Logo in vector format (.AI or .EPS). High-resolution .PNG and .JPEG versions. A black and white version. A style guide detailing color codes, fonts, and how to use the logo in different contexts. 7. Inspiration and References: We appreciate the design principles of companies like [Gainsight, "Tableau", "Snowflake", etc.], where clarity and sophistication shine through. We would love for our logo to reflect similar principles while retaining its unique identity....
...and scope requirements for development. • Create technical specifications and document development flow and metadata of reports and dashboards. • Create dynamic data models across various data entities using LookML. • Develop new explores, views, and Native Derived Tables (NDT) to be utilized by dashboards. • Perform extensive unit testing on code and content created in Looker using SQL Runner, Snowflake SQL, and the Looker content validator. • Fix code and user interface defects identified during business team demos and reviews, Quality Analysis, and User Acceptance Testing. • Move developed content between multiple tenants using CI/CD pipelines. • Interface with external teams to understand and document reporting needs and move development work as q...
Hello, I am looking for a freelancer to develop a Python and FastAPI based API for Snowflake. For starters, the API should have the following calls: DatabaseConnectionStart, DatabaseConnectionStop, Insert, CreateTable, Update, Fetch, FetchAll, Drop, Delete, Log. In addition, there should be functions for error logging and call monitoring (what calls have been made historically). The API should be robust and detailed. Testing and development will be handled on your end.
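A minimal sketch of how a few of these endpoints could be laid out with FastAPI and the official snowflake-connector-python package; the endpoint names follow the list above, but everything else (the in-memory call log, the parameter model, the table-based FetchAll) is an assumption rather than a specification:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import snowflake.connector

app = FastAPI()
_conn = None      # active Snowflake connection
_call_log = []    # simple in-memory call monitoring

class ConnectionParams(BaseModel):
    account: str
    user: str
    password: str
    warehouse: str
    database: str
    db_schema: str

@app.post("/DatabaseConnectionStart")
def database_connection_start(params: ConnectionParams):
    global _conn
    _conn = snowflake.connector.connect(
        account=params.account, user=params.user, password=params.password,
        warehouse=params.warehouse, database=params.database, schema=params.db_schema,
    )
    _call_log.append("DatabaseConnectionStart")
    return {"status": "connected"}

@app.post("/DatabaseConnectionStop")
def database_connection_stop():
    global _conn
    if _conn is not None:
        _conn.close()
        _conn = None
    _call_log.append("DatabaseConnectionStop")
    return {"status": "disconnected"}

@app.get("/FetchAll")
def fetch_all(table: str):
    if _conn is None:
        raise HTTPException(status_code=400, detail="No open connection")
    _call_log.append(f"FetchAll {table}")
    cur = _conn.cursor()
    # In production the table name must be validated/whitelisted before use.
    cur.execute(f"SELECT * FROM {table}")
    return {"rows": cur.fetchall()}

@app.get("/Log")
def get_log():
    # Call monitoring: everything invoked so far in this process.
    return {"calls": _call_log}
```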
Need help with PySpark and Databricks Delta tables.
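A small, hedged example of the kind of Delta table work this could involve (the table and column names are invented); it assumes a Databricks runtime or the delta-spark package: write a DataFrame as a Delta table, then upsert into it with MERGE by primary key.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable  # available on Databricks runtimes / delta-spark

spark = SparkSession.builder.appName("delta_example").getOrCreate()

# Create (or overwrite) a managed Delta table from a small DataFrame.
updates = spark.createDataFrame([(1, "open"), (2, "closed")], ["id", "status"])
updates.write.format("delta").mode("overwrite").saveAsTable("orders_status")

# Upsert new rows into the existing Delta table by primary key.
new_rows = spark.createDataFrame([(2, "reopened"), (3, "open")], ["id", "status"])
target = DeltaTable.forName(spark, "orders_status")
(target.alias("t")
       .merge(new_rows.alias("s"), "t.id = s.id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```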
...for a skilled data scientist to work on a project with me. Specifically, I'm looking for someone who can demonstrate proficiency in Python programming, experience with machine learning models, and abilities in data visualization. The data scientist will be working with categorical data and the project timeline is expected to last for a year (at least). Must-Have Skills: 1) Strong proficiency in PySpark and Python, with a proven ability to develop robust and efficient code. 2) Experience with product development, including understanding, enhancing, and maintaining pre-existing codebases and algorithms. 3) Ability to write deployment-level code, ensuring software quality and scalability. 4) Excellent problem-solving skills and the ability to work on algorithmic preprocessing tasks....
Title: Support Required for DBA, MariaDB, DBeaver, and Windows Task Scheduler. Overview: I am seeking a skilled DBA with expertise in MariaDB, DBeaver, and Windows Task Scheduler to provide support for my project. The primary focus will be on setup and configuration. Also needs to know how to pull data from Snowflake.
...assist me with a Big Data Analytics and Data Visualisation project. The ideal candidate should have experience in regression analysis techniques and be proficient in using Tableau for data visualisation. Project Requirements: - Perform regression analysis on a medium-sized dataset (1,000-10,000 records) - Utilize Tableau for data visualisation purposes - Use one of the datasets from Kaggle; use PySpark to analyze the dataset with appropriate algorithms and Tableau to explore the dataset and show the results of the analysis. Create a full report. Skills and Experience: - Strong knowledge and experience in regression analysis techniques - Proficiency in using Tableau for data visualisation - Familiarity with data analysis and visualization best practices - Ability to work with medium-sized dat...
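A hedged sketch of the PySpark part (the Kaggle file name, feature columns, and target are placeholders, since the posting names none): fit a linear regression with MLlib and export the predictions as CSV so Tableau can visualise them.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("regression_demo").getOrCreate()

# Load a Kaggle-style CSV; the path and column names are placeholders.
df = spark.read.csv("housing.csv", header=True, inferSchema=True)

# Assemble numeric feature columns into a single vector for MLlib.
assembler = VectorAssembler(inputCols=["sqft", "bedrooms"], outputCol="features")
train = assembler.transform(df).select("features", "price")

model = LinearRegression(featuresCol="features", labelCol="price").fit(train)
print("Coefficients:", model.coefficients)
print("R^2:", model.summary.r2)

# Export predictions to CSV so Tableau can pick them up for the report.
model.transform(train).select("price", "prediction") \
     .write.mode("overwrite").option("header", True).csv("predictions_for_tableau")
```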
I am looking for an experienced AWS data engineer who can assist me with Serverless Redshift and PySpark. I do not need help with setting up a system of automation, but I may require assistance with running analytics on the data. The ideal candidate should have experience with the following: - Serverless Redshift - PySpark Skills and experience required for this project: - Strong knowledge of AWS services, particularly Serverless Redshift and PySpark - Experience in data engineering and analytics - Familiarity with S3, Lambda, Boto3, and step functions would be a plus - Ability to work independently and efficiently - Excellent problem-solving and communication skills Working time = 8:30 PM EST to 10:30 PM EST (6 AM IST to 8 AM IST) Duration = 3 to 6 months
...logo based on these suggestions. The product is called "The CoolDude" Concept One: Smiling Snowflake: The main element of this logo is a snowflake character with a cheerful face. Imagine a six-pointed snowflake design with a twist – it's anthropomorphized with a pair of stylish sunglasses that add to its "cool" factor and a wide, friendly smile that radiates warmth. The snowflake character would be rendered in a shade of icy blue, communicating the product's cooling feature. The product's name, "The CoolDude", is positioned underneath in bold, modern typography. The color of the font could be a darker shade of blue or even black to provide an excellent contrast against the lighter blue snowflake, en...
Need support for an ETL/SQL Developer. SKILLS: - SQL Server, open-source database (MariaDB) - Just needs to know how to pull data from Snowflake - Postgres no longer needed; want strong ETL/ELT skills to keep continuity of service.
...Compute Cloud (EC2), Simple Storage Service (S3), and Relational Database Service (RDS) and other services - The training should be at an intermediate level - The training needs to be completed within a specific timeline Ideal skills and experience for the job: - Strong knowledge and experience in AWS services, particularly EC2, S3, RDS, Lambda, ApiGateWay, IAM, Dynamodb, cloudWatch, Glue, EMR and Pyspark - Proficiency in Python programming language - Experience in providing training or teaching in AWS - Ability to explain complex concepts in a clear and concise manner - Strong communication and interpersonal skills If you have the necessary skills and experience, and can deliver intermediate level training on specific AWS services within a specific timeline, please reach out ...
Need a Snowflake developer who is proficient in writing queries; also needs good experience in Informatica PowerCenter to read code. This project is to translate code from Informatica to Snowflake.
Quantori is a new company with a long history. We have over twenty years' experience in developing software for the pharmaceutical industry and driving advanced strategies in the world of Big Data revol...Azure) - Good written and spoken English skills (upper-intermediate or higher) Nice to have: - Knowledge of web-based frameworks (Flask, Django, FastAPI) - Knowledge of and experience in working with Kubernetes - Experience in working with cloud automation and IaC provisioning tools (Terraform, CloudFormation, etc.) - Experience with Data Engineering / ETL Pipelines (Apache Airflow, Pandas, PySpark, Hadoop, etc.) - Good understanding of application architecture principles We offer: - Competitive compensation - Remote work - Flexible working hours - A team with an excellent...
You will need to fill in the following pages: Technologies page (Cloud - AWS, Azure, GCP. Backend - Dotnet, Go, Databases - MSSQL, Postgres Redis, Mongo, CosmosDB, DynamoDb, BigQuery, Red Shift, Snowflake. Frontend - Angular, React, Vue.) Security page (Highest security standards based on Cloud services whichever cloud provider is required - AWS, Azure, GCP. Security scanning for vulnerabilities in dependencies and code. Apply standards of user data handling and regulations as needed) Portfolio page QA process page (Unit tests while writing code, Integration tests while deploying features, E2E tests when deploying to testing. Separating environments of development, qa and production) Services page (Mobile development services, website development services, system development se...
We are a snow industry oriented business, providing a range of customized solutions to resorts and the ski and snowboard community. We are looking for a logo that encapsulates our brand's values and vision to be the go-to for a r...resorts and the ski and snowboard community. We are looking for a logo that encapsulates our brand's values and vision to be the go-to for a range of snow industry related services. I'm looking for a designer to create a logo for us, I have a few ideas but am open to seeing mock-ups. For the logo, I would like to see the words "8CM" in capital letters, with a snowflake used as a full stop at the bottom of the M, or alternatively a snowflake in bold, next to the M. I'm open to seeing designs. Just to clarify, I am onl...
...proficiency in PySpark, Python, AWS Glue, crawler, SQL, as well as knowledge of SAP and CRM systems, will be instrumental in managing the pipelines between data lakes. Key Responsibilities: Review and assess the existing pipelines to ensure their effectiveness and efficiency. Set up robust data pipelines using AWS Glue, adhering to industry best practices and standards. Continuously modify and enhance existing pipelines to meet evolving business requirements. Collaborate with cross-functional teams to identify opportunities for optimizing data integration and transformation processes. Troubleshoot and resolve any pipeline issues or discrepancies in a timely manner. Perform data validation, quality assurance, and data integrity checks throughout the pipelines. Utilize PySpark...
I have a table in Snowflake. I would like a view with all the values from the original table plus a calculated field. Shouldn't take more than 10 minutes once you have set up the table.
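A minimal sketch of the kind of view requested, issued through the Snowflake Python connector; the table, view, and calculated column are placeholders, since the posting does not name them:

```python
import snowflake.connector

# Connection details are placeholders.
conn = snowflake.connector.connect(account="my_account", user="my_user",
                                   password="***", warehouse="MY_WH",
                                   database="MY_DB", schema="PUBLIC")
conn.cursor().execute("""
    CREATE OR REPLACE VIEW ORDERS_WITH_TOTAL AS
    SELECT t.*,
           t.QUANTITY * t.UNIT_PRICE AS LINE_TOTAL   -- the calculated field
    FROM ORDERS t
""")
conn.close()
```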
We are an expanding IT company seeking skilled and experienced data engineering professionals to support our existing workforce. We are in search of candidates proficient in a wide range of data engineering technologies, including Python, AWS, Apache Kafka, Snowflake, and AWS Kinesis. The primary responsibility of the support personnel will be to assist our team in managing, optimizing, and scaling our clients' data infrastructure. Responsibilities: - Provide support and guidance on best practices in data engineering. - Collaborate with our team to develop and maintain data pipelines, ETL processes, and data storage solutions. - Optimize and scale data infrastructure to ensure high performance and reliability. - Identify and resolve data-related issues, and provide recommendat...
I am looking for a Python expert who can help me convert a function to handle nested JSON structures. The function should be able to handle JSON structures with N levels. You can view the spark function here which works with N levels. Your task is to create something similar without using Spark Libraries. https://colab.research.google.com/drive/1hFzts8ybV9xskfBoORCkZrbYaTQ9Kwm8#scrollTo=i9gl3VFatrrt Skills and Experience: - Strong proficiency in Python and JSON manipulation - Experience with handling nested JSON structures - Familiarity with working with JSON data in a tabular format (spreadsheet-like) The ideal candidate should have a solid understanding of JSON structures and be able to convert the function to handle nested JSON structures efficiently. They should also be experien...
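A minimal, hedged sketch of flattening arbitrarily nested JSON without any Spark libraries, using only the standard library; the key separator and sample input are illustrative, and the linked Colab function may differ in details (for example, how it explodes lists into rows).

```python
import json

def flatten_json(obj, parent_key="", sep="."):
    """Recursively flatten nested dicts/lists into a single-level dict."""
    items = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            new_key = f"{parent_key}{sep}{key}" if parent_key else key
            items.update(flatten_json(value, new_key, sep))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            new_key = f"{parent_key}{sep}{i}" if parent_key else str(i)
            items.update(flatten_json(value, new_key, sep))
    else:
        items[parent_key] = obj
    return items

raw = '{"user": {"name": "Ana", "tags": ["a", "b"], "address": {"city": "Rome"}}}'
print(flatten_json(json.loads(raw)))
# {'user.name': 'Ana', 'user.tags.0': 'a', 'user.tags.1': 'b', 'user.address.city': 'Rome'}
```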
Air conditioning services and maintenance company; we would like the slogan to read: "Maintenance & servicing". I have attached a file; I would like that font, grey background, gold font please. Would you be able to incorporate a snowflake as the degree sign, or take the middle circle out of the current font and have a snowflake faded behind to one side? Thanks in advance.
...offshore technical team Required Skills: ● 4+ years’ experience of Hands-on in data structures, AWS, spark, SQL and NoSQL Databases ● Strong software development skills in Pyspark ● Experience building and deploying cloud-based solutions at scale. ● Experience in developing Big Data solutions (migration, storage, processing) ● Experience in SQL and Query optimisation ● Ability to clearly communicate technical roadmap, challenges and mitigation ● Experience building and supporting large-scale systems in a production environment Technology Stack: ● Cloud Platforms – AWS ● Mandatory – High programming skill in Python and Pyspark, Hands-on experience with the AWS Redshift ● Nice to have - Experience in Bigdata Technologies such as Hive, Spark, Lambda, AWS Clo...
We want to provide Zendesk data sync (using a Python script) to Snowflake, plus reporting in Power BI with row-level security, to our customers. Each customer will use their own Snowflake account to store their Zendesk data, as we don't want to pay for customer usage out of our own pocket. Whenever we onboard a customer, we will ask for their Zendesk API credentials and Snowflake account access to set up our Python code to push data. We will also set up predefined Power BI reports which will extract data from Snowflake. We plan to offer this as a product to our customers.
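A hedged sketch of the sync step, assuming the standard Zendesk REST tickets endpoint and the write_pandas helper from snowflake-connector-python; the subdomain, credentials, and Snowflake object names are all onboarding-time placeholders:

```python
import requests
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Zendesk subdomain and API-token credentials are supplied by the customer at onboarding.
resp = requests.get(
    "https://example.zendesk.com/api/v2/tickets.json",
    auth=("agent@example.com/token", "ZENDESK_API_TOKEN"),
)
tickets = pd.json_normalize(resp.json()["tickets"])

# Each customer's own Snowflake account receives the data.
conn = snowflake.connector.connect(account="customer_account", user="sync_user",
                                   password="***", warehouse="SYNC_WH",
                                   database="ZENDESK", schema="RAW")
write_pandas(conn, tickets, table_name="TICKETS", auto_create_table=True)
conn.close()
```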
We are seeking a talented Database Developer with expertise in JSON data processing and PySpark to join our team. The ideal candidate will play a crucial role in designing and developing a custom query builder for efficient JSON data processing using PySpark. This is a fantastic opportunity to work with cutting-edge technologies and contribute to the development of innovative data processing solutions. As a Database Developer, you will collaborate with cross-functional teams, including data scientists and analysts, to understand business requirements and translate them into efficient and scalable solutions. You will be responsible for designing and implementing data models and database schemas for optimal storage and retrieval of JSON data. Additionally, you will develop and...
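As a rough illustration of what a query builder over JSON data could look like in PySpark (the spec format, field names, and file path are all assumptions, not from the posting): a small dict-based spec is translated into DataFrame operations over a nested JSON source.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("json_query_builder").getOrCreate()

def run_query(df, spec):
    """Translate a small dict-based query spec into DataFrame operations."""
    if "filter" in spec:          # e.g. {"filter": "payload.amount > 100"}
        df = df.filter(spec["filter"])
    if "select" in spec:          # e.g. {"select": ["id", "payload.amount"]}
        df = df.select(*[F.col(c) for c in spec["select"]])
    if "order_by" in spec:
        df = df.orderBy(spec["order_by"])
    return df

# "events.json" is a placeholder nested JSON source file.
events = spark.read.json("events.json")
query = {"filter": "payload.amount > 100",
         "select": ["id", "payload.amount"],
         "order_by": "payload.amount"}
run_query(events, query).show()
```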
Tools: Airflow, Docker, Spark. Task: Using Airflow DAGs, build a pipeline based on distributed computation offered by Spark, but not Pyspark, and keep a log of the pipeline execution and Dockerize it. 1. Download the ETF and stock datasets from the primary dataset available at 2. Set up a data structure to retain all data from ETFs and stocks in the following columns. Symbol: string Security Name: string Date: string (YYYY-MM-DD) Open: float High: float Low: float Close: float Adj Close: float Volume: int Note: Do not change Adj Close to Adj_Close 3.1. Convert the resulting dataset into a structured format (Parquet). 3.2. Calculate the moving average of the trading volume (Volume) of 30 days per each stock and ETF, and retain
I am looking for someone who is familiar with both Spark and Airflow. The main goal of implementing Spark in Airflow for my project is to improve scheduling and automation. Tools: Airflow, Docker, Spark. Task: Using Airflow dags, build a pipeline based on distributed computation offered by Spark, but not Pyspark, and keep a log of the pipeline execution and Dockerize it. 1. Download the ETF and stock datasets from the primary dataset available at 2. Set up a data structure to retain all data from ETFs and stocks in the following columns. Symbol: string Security Name: string Date: string (YYYY-MM-DD) Open: float High: float Low: float Close: float Adj Close: float Volume: int Note: Do not change Adj Close to Adj_Close
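Since the task asks for Spark but not PySpark, one common pattern is an Airflow DAG (Python) that submits a Scala/Java Spark job via SparkSubmitOperator; the JAR path, main class, connection id, and arguments below are assumptions, and the Spark job itself would produce the Parquet output and 30-day moving averages described above. The whole thing can be Dockerized by bundling Airflow and a Spark client into one image.

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="etf_stock_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    # Submits a non-PySpark (Scala/Java) Spark job; execution details land in the task log.
    ingest_and_transform = SparkSubmitOperator(
        task_id="ingest_and_transform",
        application="/opt/jobs/etf-pipeline-assembly.jar",   # placeholder JAR path
        java_class="com.example.EtfPipeline",                # placeholder main class
        conn_id="spark_default",
        application_args=["--input", "/data/raw", "--output", "/data/parquet"],
        verbose=True,
    )
```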
We are seeking a freelancer with 6+ years of experience. Skills required: any cloud knowledge (Azure, AWS, & Google Cloud) - Databricks, Data Lake & Data Factory; also PySpark or Scala, and knowledge of ETL tools. We are seeking an experienced Senior Data Engineer with experience in the architecture, design, and development of highly scalable data integration and data engineering processes. The Senior Consultant must have a strong understanding of and experience with data & analytics solution architecture, including data warehousing, data lakes, ETL/ELT workload patterns, and related BI & analytics systems. Strong in scripting languages like Python and Scala. 6+ years hands-on experience with any cloud platform. Experience building on-prem data warehousing solutions. Experience with...
Cloud & Data Infrastructure Engineer. Skills: Azure Infrastructure Foundation, Azure Event Hub, Azure IoT Hub, Azure Stream Analytics, Azure Data Lake Services, Python/PySpark/Databricks, Kubernetes, Azure DevOps. Years of experience: min. 4 years. Do you have any suitable profiles with the same tech skills?
I am building cloud-based analytics software. The software pulls data from Salesforce, Hubspot, Fresh Sales, Snowflake, and Mixpanel and presents it in the form of charts and graphs. I need a software developer to build it from the Figma designs I have created. First, I want to send data from Salesforce, Hubspot, Fresh Sales, Snowflake, and Mixpanel to an AWS S3 bucket; you will have to create some kind of scheduler to send the data on a regular basis. Send the daily data at night (keep the clock according to the user's location, as some of my teams are in different timezones). Second, make an API call to AWS Lambda, which will process the data
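A hedged sketch of the scheduler step only, under several assumptions: the bucket and Lambda names are placeholders, the source pulls are stubbed out, and a simple in-process scheduler stands in for whatever mechanism (cron, EventBridge, etc.) is actually chosen.

```python
import json
import time
import boto3
import schedule  # pip install schedule; EventBridge or cron would also work

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

def extract_from_sources():
    # Placeholder for the Salesforce / Hubspot / Fresh Sales / Snowflake / Mixpanel pulls.
    return {"example": "daily extract"}

def nightly_sync():
    """Push the day's extract to S3, then trigger the processing Lambda."""
    key = "daily/extract.json"
    s3.put_object(Bucket="analytics-raw-bucket", Key=key,        # placeholder bucket
                  Body=json.dumps(extract_from_sources()).encode("utf-8"))
    lambda_client.invoke(FunctionName="process-daily-extract",   # placeholder Lambda
                         InvocationType="Event",                 # fire-and-forget async call
                         Payload=json.dumps({"s3_key": key}).encode("utf-8"))

# "Night time" is relative to each user's timezone in the real product;
# here it is simply fixed at 01:00 local time for illustration.
schedule.every().day.at("01:00").do(nightly_sync)
while True:
    schedule.run_pending()
    time.sleep(60)
```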
We need a developer experienced in Python, Snowflake, and SQL to work 2 hours a day remotely. We will pay 25 to 29k per month. Must know all those technologies and have 6 years of experience.
Looking for a skilled data engineer with at least 4 years of experience, who has worked with Snowflake before. The project involves (goal of project not provided) and requires someone who can efficiently manage and process a large amount of data. Ideal candidates should have experience with data migration, performance optimization, and building data pipelines.
Add history functionality to an existing ETL process in PySpark. Need to account for race conditions on the primary key.
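A hedged sketch of one way to keep history while guarding against concurrent updates on the same primary key (the table and column names are invented): append every new version with a load timestamp, then derive the "current" view by picking the latest row per key with a window function, so near-simultaneous writes resolve deterministically.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("etl_history").getOrCreate()

# Incoming batch; in the real ETL this would come from the existing pipeline.
incoming = spark.createDataFrame(
    [(1, "open", "2024-01-02 10:00:00"), (1, "closed", "2024-01-02 10:00:05")],
    ["order_id", "status", "load_ts"],
).withColumn("load_ts", F.to_timestamp("load_ts"))

# Append every version to the history table instead of overwriting.
incoming.write.mode("append").format("parquet").saveAsTable("orders_history")

# Current view: latest record per primary key, ordered by load timestamp,
# so racing writes on the same order_id resolve deterministically.
w = Window.partitionBy("order_id").orderBy(F.col("load_ts").desc())
current = (spark.table("orders_history")
                .withColumn("rn", F.row_number().over(w))
                .filter("rn = 1")
                .drop("rn"))
current.show()
```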
We are seeking a skilled developer with expertise in Java Spring Boot and Python (specifically PySpark) to join our team. In this role, you will be responsible for integrating Python PySpark code within a Java Spring Boot application. You will work closely with cross-functional teams to understand requirements, design the integration architecture, and implement seamless communication between Java and Python components.