- Will be ready to learn and explore new ideas, processes, methodologies and leading-edge technologies
- Focuses on the overall stability and availability of the Big Data platform and the associated interfaces and transport protocols
- Researches, manages and coordinates resolution of complex issues through root cause analysis as appropriate
- Establishes and maintains productive relationships with technical leads of key operational source systems providing data to the Big Data platform
- Establishes and maintains productive relationships with technical leads of key infrastructure support areas, such as system/infrastructure engineers
- Ensures adherence to established problem/incident management, change management and other internal IT processes
- Responsible for communication related to any day-to-day issues and problem resolutions
- Infrastructure as Code (Puppet / Ansible / Chef / Salt)
- Data security and privacy (privacy-preserving data mining, data security, data encryption)
- Act as the focal point in determining and making the case for applications to move onto the Big Data platform
- Hands-on experience leading large-scale global data warehousing and analytics projects
- Ability to communicate objectives, plans, status and results clearly, focusing on the critical few key points
- Participate in installation, configuration, and troubleshooting of the Hadoop platform, including hardware and software
- Plan, test and execute upgrades involving Hadoop components; assure Hadoop platform stability and security
- Help design, document, and implement administrative procedures, the security model, backup, and failover/recovery operations for the Hadoop platform
- Act as a point of contact with our vendors; oversee vendor activities related to support agreements
- Research, analyze, and evaluate software products for use in the Hadoop platform
- Provide IT and business partners consultation on using the Hadoop platform effectively
- Build, leverage, and maintain effective relationships across the technical and business community
- Participates in and evaluates systems specifications regarding customer requirements, transforming business specifications into cost-effective, technically correct database solutions
- Prioritizes work and assists in managing projects within established budget objectives and customer priorities
- Supports a distinct business unit or several smaller functions
- Responsibilities are assigned with some latitude for setting priorities and decision-making using established policies and procedures
- Results are reviewed with the next-level manager for clarification and direction before proceeding
- 3 to 5 years of Hadoop administration experience, preferably using Cloudera
- 3+ years of experience on Linux, preferably RedHat/SUSE
- 1+ years of experience creating MapReduce jobs and ETL jobs in Hadoop, preferably using Cloudera (a minimal MapReduce sketch follows this list)
- Experience sizing and scaling clusters, adding and removing nodes, provisioning resources for jobs, job maintenance and scheduling
- Familiarity with Tableau, SAP HANA or SAP BusinessObjects
- Proven experience as a Hadoop Developer/Analyst in the Business Intelligence and Data Management production support space is needed
- Strong communication, technology awareness and the ability to interact and work with senior technology leaders is a must
- Strong knowledge of and working experience in Linux, Java and Hive
- Working knowledge of enterprise data warehouses; should have dealt with various data sources
- Cloud enablement – implementing Amazon Web Services (AWS)
- BI & Data Analytics – implementing BI and analytics and utilizing cloud services
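Several of the bullets above ask for hands-on MapReduce and ETL job development. As a point of reference, here is a minimal sketch of a word-count job using the standard Hadoop Java API; the class names are illustrative, and the input/output paths are hypothetical command-line arguments.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Emits (word, 1) for every whitespace-separated token in a line.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }
    }

    // Sums the counts for each word; also usable as a combiner.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // hypothetical input path
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // hypothetical output path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A job like this would typically be packaged as a jar and submitted with `hadoop jar`, with scheduling handled by a tool such as Oozie.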
- 5+ years of experience testing applications on Hadoop products
- 5+ years of experience setting up Hadoop test environments
- Expertise in developing automated tests for Web, SOA/WS, DW/ETL and Java backend applications (a minimal Selenium/JUnit sketch follows this list)
- Expertise in automation tools: Selenium (primary), HP UFT
- Expertise in test frameworks: Cucumber, JUnit, Mockito
- Expertise in programming languages: Java (primary), JavaScript
- Proficiency with build tools: SVN, Crucible, Maven, Jenkins
- Experience with project management tools: Rally, JIRA, HP ALM
- Experience in developing and maintaining Hadoop clusters (Hortonworks, Apache, or Cloudera)
- Experience with Linux patching and support (Red Hat / CentOS preferred)
- Experience upgrading and supporting Apache open source tools
- Experience with LDAP, Kerberos and other authentication mechanisms
- Experience with HDFS, YARN, HBase, Solr and MapReduce code
- Experience deploying software across the Hadoop cluster using Chef, Puppet, or similar tools
- Familiarity with NIST 800-53 controls a plus
- Substantial experience and expertise in actually doing the work of setting up, populating, troubleshooting, maintaining and documenting systems, and training users
- Requires broad knowledge of the Government's IT environments, including office automation networks and PC- and server-based databases and applications
- Experience using open source projects in production preferred
- Experience in a litigation support environment extremely helpful
- Ability to lead a technical team and give it direction will be very important, as will the demonstrated ability to analyze the attorneys' needs and to design and implement a whole-system solution responsive to those needs
- Undergraduate degree strongly preferred, ideally in the computer science or information management/technology disciplines
- 3+ years of software development and design
- 1+ years developing applications in a Hadoop environment
- Experience with Spark, HBase, Kafka, Hive, Scala, Pig, Oozie, Sqoop and Flume
- Understanding of managed distributions of Hadoop, like Cloudera, Hortonworks, etc.
- Strong diagramming skills – flowcharts, data flows, etc.
- Bachelor's degree in Computer Science or equivalent work experience
- 5+ years of software development and design
- 3+ years developing applications in a Hadoop environment
- 3+ years of diverse programming in languages like Java, Python, C++ and C#
- Well versed in managed distributions of Hadoop, like Cloudera, Hortonworks, etc.
- Understanding of cloud platforms like AWS and Azure
- 5+ years' experience in server-side Java programming in a WebSphere/Tomcat environment
- Strong understanding of Java concurrency and concurrency patterns; experience building thread-safe code
- Experience with SQL/stored procedures on one of the following databases (DB2, MySQL, Oracle)
- Experience with high-volume, mission-critical applications
- Sound understanding of and experience with the Hadoop ecosystem (Cloudera)
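For the test-automation bullets (Selenium primary, JUnit), a minimal browser test might look like the sketch below. The URL and title assertion are hypothetical, and it assumes a chromedriver binary on the PATH and the selenium-java and JUnit 4 dependencies on the classpath.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginPageTest {
    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new ChromeDriver(); // assumes chromedriver is on the PATH
    }

    @Test
    public void titleShouldMentionLogin() {
        driver.get("https://example.com/login"); // hypothetical URL
        assertTrue(driver.getTitle().toLowerCase().contains("login"));
    }

    @After
    public void tearDown() {
        driver.quit(); // always release the browser session
    }
}
```

In a Jenkins pipeline, tests like this would run from Maven (`mvn test`) as part of the build.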
- Designs and implements modifications or enhancements to forms, menus, and reports
- Implements processes for data management, including data standardization and cleansing initiatives, as part of the overall design, development, fielding, and sustainment of a system
- Executes advanced database concepts, practices and procedures
- Analyze, define and document requirements for data, workflow, logical processes, hardware and operating system environment, interfaces with other systems, internal and external checks, controls, and outputs
- Design, develop and maintain ELT-specific code
- Design reusable components and user-defined functions (a minimal Hive UDF sketch appears below)
- Perform complex applications programming activities
- Experience with sales processes, sales teams, and customer-facing positions is a plus
- Knowledge of relational and non-relational databases; Hadoop experience is a premium
- Broader technical knowledge of servers, storage and cloud
- Work with database vendors to resolve database software bugs
- Architect and develop ETL solutions across platforms
- Understand application and system architecture
- Troubleshoot OS-level event logs as well as database platform logging when resolving complex issues
- Utilize multiple scripting languages such as PowerShell/Python/Bash shell scripts to administer and perform daily tasks
- Handle intermediate-level script development for automation
- Good working experience with the Microsoft Office suite
- High degree of motivation and adaptability
- Influence skills to work with development teams whose short-term goals may differ
- Responsible for implementation and support of the enterprise Hadoop environment
- Candidates with a passion for coding and systems development from other disciplines can also apply
- Work experience in a product company is an added advantage
- Build and support scalable and durable data solutions that can enable self-service advanced analytics at HomeAway using both traditional (SQL Server) and modern DW technologies (Hadoop, Spark, cloud, NoSQL, etc.)

Junior Hadoop Dev / Ops Developer Resume Examples & Samples

Collecting the data when user uses …

- Knowledge and experience in Linux/Unix computer networking
- Knowledge and experience in storage systems (SAN, NAS)
- Knowledge and experience with either Perl or Python programming
- Experience with large-scale Linux environments
- Experience with Hive, MapReduce, YARN, Spark, Sentry, Oozie, Sqoop, Flume, HBase, Impala, etc.
- Minimum of 3-5 years of experience working in the Hadoop/Big Data field
- Working experience with tools like Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, etc.
- Minimum 2-3 years of hands-on programming experience in Java, Scala, Python and shell scripting
- Experience in the end-to-end design and build process of near-real-time and batch data pipelines
- Strong experience with SQL and data modelling
- Experience working in an Agile development process and a good understanding of the various phases of the software development life cycle
- Experience using source code and version control systems like SVN, Git, etc.
- Deep understanding of the Hadoop ecosystem and strong conceptual knowledge of Hadoop architecture components
- Self-starter who works with minimal supervision
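On a Hadoop platform, the "reusable components and user-defined functions" work referenced above is often delivered as Hive UDFs. A minimal sketch, assuming Hive's classic UDF API (org.apache.hadoop.hive.ql.exec.UDF); the package, class and function names are illustrative only.

```java
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Collapses runs of whitespace and trims the input string.
// Registered in Hive with, e.g. (hypothetical jar and names):
//   ADD JAR udfs.jar;
//   CREATE TEMPORARY FUNCTION normalize_ws AS 'com.example.udf.NormalizeWhitespace';
public class NormalizeWhitespace extends UDF {
    public Text evaluate(Text input) {
        if (input == null) return null; // preserve SQL NULL semantics
        return new Text(input.toString().trim().replaceAll("\\s+", " "));
    }
}
```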
- Ability to work in a team of diverse skill sets
- Ability to comprehend customer requests and provide the correct solution
- Strong analytical mind to help solve complicated problems
- Desire to resolve issues and dive into potential issues
- Good team player, interested in sharing knowledge with other team members and in learning new technologies and products
- Ability to think outside the box and provide innovative solutions
- Great communication skills to discuss requests with customers
- A broad knowledge of Hadoop and how to leverage the data with multiple applications
- Bachelor's degree and 4+ years of experience, or high school equivalent and 8+ years of experience
- A current Security+ CE certification is required
- Experience managing Hadoop clusters, including providing monitoring and administration support (a minimal HDFS usage-report sketch appears below)
- Minimum 2 years' experience with Linux system administration
- Must possess strong interpersonal, communication, and writing skills to carry out and understand assignments, or convey and/or exchange routine information with other contractor and government agencies
- Must be able to work with minimal supervision in a high-intensity environment and accommodate a flexible work schedule
- Utilize open source technologies to create fault-tolerant, elastic and secure high-performance data pipelines
- Work directly with software developers to simplify processes, enhance services and automate application delivery
- BS degree in Computer Science/Engineering required
- Experience with configuration management tools, deployment pipelines, and orchestration services (Jenkins)
- Familiarity with Hadoop security and permission schemes
- Reporting to the program manager on project/task progress as needed

Expertise in HDFS, MapReduce, Hive, Pig, Sqoop, HBase and Hadoop … Experience in installing, configuring and using Hadoop …

- Strong interpersonal relationship and communication skills
- Ability to multi-task and change focus quickly

The big data universe is expanding rapidly. Prior work experience in technology engineering and development is a plus.

- 5+ years of advanced Java/Python development experience (Spring Boot/Python, server-side components preferred)
- 2+ years of Hadoop ecosystem (HDFS, HBase, Spark, ZooKeeper, Impala, Flume, Parquet, Avro) experience for high-volume platforms and scalable distributed systems
- Experience working with data models, frameworks and open source software; RESTful API design and development; and software design patterns
- Experience with Agile/Scrum methodologies, FDD (feature-driven development), TDD (test-driven development) and Elasticsearch (ELK)
- Automation of SRE for Hadoop technologies: Cloudera, Kerberos, encryption, performance tuning, and CI/CD (continuous integration and deployment)
- Capable of full lifecycle development: user requirements, user stories, development with a team and individually, testing and implementation
- Knowledge of technology infrastructure stacks a plus, including: Windows and Linux operating systems, network (TCP/IP), storage, virtualization, DNS/DHCP, Active Directory/LDAP, cloud, source control/Git, ALM tools (Confluence, Jira), APIs (Swagger, Gateway), automation (Ansible/Puppet)
- Production implementation experience on projects of considerable data size (petabytes) and complexity
- Strong verbal and written communication skills, with the ability to be highly effective with both technical and business partners
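For the cluster monitoring and administration bullet, here is a small Java sketch using Hadoop's FileSystem API to report per-directory usage under /user. The NameNode URI is a hypothetical placeholder; in practice the endpoint comes from the cluster's core-site.xml on the classpath.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUsageReport {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode endpoint; normally resolved from core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode.example.com:8020"), conf);
        // Print the total size of each user's home directory.
        for (FileStatus status : fs.listStatus(new Path("/user"))) {
            long bytes = fs.getContentSummary(status.getPath()).getLength();
            System.out.printf("%-40s %,d bytes%n", status.getPath(), bytes);
        }
        fs.close();
    }
}
```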
Created complex …

- Responsible for data ingestion, data quality, development, production code management and testing/deployment strategy in Big Data (Hadoop) development
- Acts as a lead in identifying and troubleshooting processing issues that impact the timely availability of data in the Big Data platform or the delivery of critical reporting within established SLAs

Present the most important skills in your resume; here is a list of typical ETL developer skills:

- Good interpersonal and customer service skills
- Is, if possible, certified at this level and in Project Financials, KPI & Reporting
- Should be experienced in negotiation, vendor management, risk management and continuous (service) improvement
- Should have progression skills in quality management
- Must have experience with RDBMS and the Big Data/Hadoop ecosystem (HDP platform): Sqoop, Flume, Pig, Hive, HBase and Spark
- Must have experience with data modeling tools (ERwin or ER/Studio)
- Must have experience with Unix and Windows operating systems
- Must have experience with ETL tools (Informatica, SSIS, etc.)
- Provide direction to junior programmers
- Handle creation of technical documentation, architecture diagrams, data flow diagrams and server configuration diagrams

Lead Big Data / Hadoop Application Developer Resume …

Hadoop Developer with 3 years of working experience designing and implementing complete end-to-end Hadoop infrastructure using MapReduce, Pig, Hive, Sqoop, Oozie, Flume, Spark, HBase and ZooKeeper. Involved in Agile methodologies, daily Scrum meetings and sprint planning.

Responsibilities

- This job includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig and MapReduce access for the new users (a minimal Hive access check appears below)
- Contribute ideas to application stability, performance improvements and process improvements
- Primarily using Cloudera Manager, with some command line
- Red Hat Enterprise Linux operating system support, including administration and provisioning of Oracle BDA
- Answering trouble tickets around the Hadoop ecosystem
- Integration support of tools that need to connect to the OFR Hadoop ecosystem
- Lead and/or participate on teams focused on creating technical bulletins, procedures and support processes (change templates, FRO bulletins, service bulletins, support tools)
- Effectively identify change and use appropriate protocols to manage and communicate this change
- Collect, maintain and distribute project status meeting minutes to stakeholders
- Provide routine status reports and briefings to project team, customers and senior managers
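The "testing HDFS, Hive, Pig and MapReduce access for the new users" responsibility can be partly scripted. Below is a minimal sketch of a Hive access check over JDBC, assuming a HiveServer2 endpoint (hypothetical hostname) and taking the new user's name as a command-line argument; on a Kerberized cluster the URL would instead carry a principal parameter and require a prior kinit.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveAccessCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // requires hive-jdbc on the classpath
        // Hypothetical HiveServer2 endpoint.
        String url = "jdbc:hive2://hiveserver.example.com:10000/default";
        try (Connection conn = DriverManager.getConnection(url, args[0], "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            // If this prints table names without an exception,
            // the new user can reach HiveServer2 and read the metastore.
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```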
- Involves designing, capacity planning, cluster setup, monitoring, structure planning, scaling and administration of Hadoop components (YARN, MapReduce, HDFS, HBase, ZooKeeper)
- Work closely with infrastructure, network, database, business intelligence and application teams to ensure business applications are highly available and performing within agreed-upon service levels
- Strong experience with configuring security in Hadoop using Kerberos or PAM (a minimal keytab-login sketch appears below)
- Evaluate technical aspects of any change requests pertaining to the cluster
- Research, identify and recommend technical and operational improvements resulting in improved reliability and efficiency in developing the cluster
- Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, Hadoop Streaming, Flume, Sqoop, Oozie, Hive, HBase, Solr and Kerberos
- Deep understanding of and experience with the Cloudera CDH 5.7 (and above) Hadoop stack
- Responsible for cluster availability; available 24x7
- Knowledge of Ansible and how to write Ansible scripts
- Familiarity with open source configuration management and deployment tools such as Ambari, and with Linux scripting
- Knowledge of troubleshooting core Java applications is an added advantage
- 8+ years' hands-on experience designing, building and supporting high-performing J2EE applications
- 5+ years' experience using Spring and Hibernate, Tomcat, and Windows Active Directory
- Strong experience developing web services and the messaging layer using SOAP, REST, JAXB, JMS and WSDL
- 3+ years' experience using Hadoop, especially Hortonworks Hadoop (HDP)
- Good understanding of Knox, Ranger, Ambari and Kerberos
- Experience with database technologies such as MS SQL Server, MySQL, and Oracle
- Experience with unit testing and source control tools like Git, TFS and SVN
- Expertise with web and UI design and development using AngularJS and Backbone.js
- Strong Linux shell scripting and Linux knowledge
- Code reviews; ensure best practices are followed
- This person will need to have had exposure to, and worked on, projects involving Hadoop or grid computing
- 10+ years' project management experience in a large enterprise environment
- PowerPoint presentation skills – will be building presentations around the people/process improvements they have suggested and presenting them to senior-level leadership
- Managing projects end to end; a teamwork mindset and a determined individual
- *This position cannot be remote; must be able to work on a W2 basis only, without sponsorship
- Troubleshoot problems encountered by customers
- File bug reports and enhancement requests as appropriate
- Work with our issue tracking and sales management software
- Partners with product owner(s) to review business requirements, translates them into user stories, and manages a healthy backlog for the scrum teams
- Works with various stakeholders and contributes to producing technical documentation such as data architecture, data modeling, data dictionaries, source-to-target mapping with transformation rules, ETL data flow design, and test cases
- Discovers, explores, analyzes and documents data from various sources, with different formats and frequencies, into Hadoop to better understand the total scope of data availability at Workforce Technology
- Participates actively in the Agile development methodology to improve the overall maturity of the team
- Helps identify roadblocks and resolve dependencies on other systems, teams, etc.
- Collaborate with big data engineers, data scientists and others to provide development coverage, support, knowledge sharing and mentoring of junior team members
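For the Kerberos security bullet, one common pattern is a Java service authenticating to a secured cluster from a keytab via Hadoop's UserGroupInformation API. A minimal sketch follows; the principal and keytab path are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLogin {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Tell the Hadoop client libraries the cluster expects Kerberos.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Hypothetical service principal and keytab path.
        UserGroupInformation.loginUserFromKeytab(
                "svc_etl@EXAMPLE.COM", "/etc/security/keytabs/svc_etl.keytab");
        System.out.println("Logged in as: " + UserGroupInformation.getCurrentUser());
    }
}
```

Subsequent HDFS or Hive calls made in the same JVM then run as the authenticated principal.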
- Escalate issues and concerns as needed, on time
- Must have a passion for the Big Data ecosystem and understand structured, semi-structured and unstructured data well
- The individual must have 10+ years of diversified experience analyzing and developing applications using Java, ETL, RDBMS or any big data stack of technologies
- 5+ years of experience working in such technical environments as a systems analyst
- 3+ years of experience with the Agile (Scrum) methodology
- 3+ years of hands-on experience with data architecture, data modeling, database design and data warehousing
- 3+ years of hands-on experience with SQL development and query performance optimization
- 3+ years of hands-on experience with traditional RDBMS such as Oracle, DB2, MS SQL and/or PostgreSQL
- 2+ years of experience working with teams on the Hadoop stack of technologies such as MapReduce, Pig, Hive, Sqoop, Flume, HBase, Oozie, Spark, Kafka, etc.
- 2+ years of experience in the data security paradigm
- Excellent thinking, verbal and written communication skills
- Strong estimating, planning and time management skills
- Strong understanding of NoSQL, Big Data and open source technologies
- Ability and desire to thrive in a proactive, highly engaging, high-pressure environment
- Experience with developing distributed systems, performance optimization and scaling
- Experience with Agile, test-driven development and behavior-driven development methodologies
- Familiarity with Kafka, Hadoop and Spark desirable
- Basic exposure to Linux; experience developing scripts
- Strong analytical and problem-solving skills are a must
- At least 2 years of experience in project life cycle activities on DW/BI development and maintenance projects
- At least 3 years of experience in design and architecture review
- At least 2 years of hands-on experience in design, development and build activities on Hadoop framework, Hive, Sqoop and Spark projects
- At least 4 years of experience with Big Data / Hadoop
- 2+ years of experience in an ETL tool, with hands-on HDFS work on a big data Hadoop platform
- 2+ years of experience implementing ETL/ELT processes with big data tools such as Hadoop, YARN, HDFS, Pig and Hive (a minimal Spark ETL sketch appears below)
- 1+ years of hands-on experience with NoSQL (e.g.
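For the ETL/ELT bullets, here is a minimal sketch of a batch Spark job (Java API) that reads raw CSV from HDFS, filters it, and writes partitioned Parquet. The paths, column names and status value are all hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class OrdersEtl {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("orders-etl").getOrCreate();
        // Hypothetical raw input: headered CSV files landed on HDFS.
        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .csv("hdfs:///raw/orders/*.csv");
        // Keep completed orders and write them out as date-partitioned Parquet.
        orders.filter(col("status").equalTo("COMPLETE"))
              .write()
              .mode("overwrite")
              .partitionBy("order_date")
              .parquet("hdfs:///curated/orders");
        spark.stop();
    }
}
```

A job like this would typically be submitted with `spark-submit` on a YARN cluster, with Oozie or a similar scheduler handling orchestration.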