About The Position
AllCloud is a global professional services company providing organizations with tools for cloud enablement and transformation. As an AWS Premier Consulting Partner and an official AWS Authorized Training Partner (ATP), AllCloud helps clients connect their front office and back office by building a new operating model that allows them to harness the benefits of both Salesforce and AWS. With over 12 years of experience and a portfolio of thousands of successful cloud deployments, AllCloud serves clients across the globe, with offices in Israel, Europe, and North America.
AllCloud is searching for a Big-Data Cloud Engineer with strong experience across the entire cloud data stack. The ideal candidate will have extensive experience with data pipelines (ELT/ETL), data warehousing and dimensional modeling, and curation of data sets for Data Scientists and Business Intelligence users, as well as experience building data lakes and on-premises-to-cloud data migration projects. This candidate will also have excellent problem-solving ability when dealing with large volumes of data.
How You'll Make Your Mark:
- Design, Build, and Operate the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, cloud migration tools, and ‘big data’ technologies.
- Optimize various RDBMS engines in the cloud and solve customers' security, performance, and operation problems.
- Design, Build, and Operate large, complex data lakes that meet functional / non-functional business requirements.
- Optimize the ingestion, storage, processing, and retrieval of varied data types, from near-real-time events and IoT telemetry to unstructured data such as images, audio, video, and documents.
- Work with customers and internal stakeholders, including the Executive, Product, Data, Software Development, and Design teams, to assist with data-related technical issues and support their data infrastructure and business needs.
Requirements:
- 5+ years of experience in a Data Engineer role in a cloud-native ecosystem.
- Bachelor's degree (graduate degree preferred) in Computer Science, Mathematics, Informatics, Information Systems, or another quantitative field.
- Working experience with AWS Glue ETL, Redshift, and Redshift Spectrum.
- Experience in implementing data pipelines for both streaming and batch integrations using tools/frameworks like Glue ETL, Lambda, Spark, Spark Streaming, etc.
- Working experience with big data tools such as Spark, Elasticsearch, Hadoop, Kafka, Kinesis, etc.
- Experience with relational and NoSQL databases, such as MySQL or Postgres and DynamoDB or Cassandra.
- Work experience with ETL tools such as Informatica, Matillion, Fivetran, dbt, Talend, etc.
- Proficiency in programming and scripting languages such as Python, Java, or Scala, plus advanced SQL.
- Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores.
- Experience supporting and working with external customers in a dynamic environment.
- Experience with Power BI / Tableau is preferred.
Certifications That Would Help You To Be Successful:
- AWS Solutions Architect, AWS Developer, and/or AWS Data Analytics Specialty certification is preferred.