Team and Mission
The team's mission is to create and maintain the Identity Management solution in the Cloud.
Identity management helps customers administer the provisioning of new and existing users into different systems. It also helps them create and maintain the necessary roles for their company and perform periodic audits of those roles.
We believe in shipping manageable, bite-sized increments frequently. We cultivate a low-friction environment in which engineers are naturally productive. We're a small team with huge ambition in a stable, fast-growing company, and we're looking for a few smart people who are excited about building a platform to help millions be more secure.
Role
We are looking for a talented big data developer to join our Big Data and Analytics team. As a Data Engineer, you will be responsible for designing, building, testing, and delivering big data and analytics software. This is an exciting role for someone who loves working with cutting-edge technologies and huge data processing pipelines. The qualified candidate has a can-do attitude and is an innovative thinker.
Day-to-day, you will:
Design, develop and test big data applications.
Build and maintain real-time and batch data pipelines.
Meet functional and non-functional business requirements.
Implement custom ETLs.
Review code and design documentation.
Collaborate with architects, data scientists, developers and product managers.
Be willing to learn new programming languages.
Work closely with a variety of teams to create value throughout the company.
Ideally, you have:
5+ years of software engineering experience on enterprise-level apps and systems
2+ years of experience building large-scale big data applications
Experience with analytics engines and frameworks, such as Hadoop, Spark and Flink.
Experience with Big Data technologies (Hortonworks, Cloudera, Amazon EMR)
Hands-on experience building CI/CD pipelines.
Experience with Python.
Strong SQL skills and good knowledge of big data querying tools (Hive, Impala, SparkSQL, Amazon Athena).
Experience with streaming data processing solutions (Kafka, Spark Streaming, Amazon Kinesis, Flume, NiFi, Storm, etc.).
Experience tuning big data routines to improve performance.
Experience with NoSQL databases such as MongoDB, HBase, DynamoDB, Cassandra, etc.
Experience working with non-relational data formats (JSON, Parquet, ORC, Avro, SequenceFile, etc.).
Experience with data lakes and data integration from multiple sources.
Experience with relational databases like PostgreSQL, MySQL, Oracle, MS SQL Server.
Excellent communication and problem-solving skills.
Bachelor's degree in Computer Science or a related field, or equivalent experience.
Upper-intermediate English communication skills, both written and verbal.
High degree of autonomy
Self-taught; interested in continually keeping up with new technologies.
Interested and motivated to be part of a high-performance team.
Nice to have:
Experience with AWS cloud native services.
Scala programming background.
Knowledge of the following tools: Docker, Kubernetes.
Interest and previous knowledge in information security and data governance.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.