Aerodyne is a leading 360DT3 drone-based enterprise solutions provider, ranked as the world's #1 drone service provider for two consecutive years by Drone Industry Insights (DII) in 2021 and 2022. Our holistic approach to Drone Tech, Data Tech, and Digital Transformation (360DT3) helps organizations overcome complex industrial challenges by leveraging drone data and AI-powered analytics. We adopt a client-centric approach and collaborate with organizations using the Build-Operate-Transfer (BOT) model, empowering them to establish their own in-house drone capabilities while capitalizing on our expertise and experience.
Aerodyne employs over 1,000 drone professionals and operates at an unprecedented scale in the Unmanned Aircraft System (UAS) services sector, having managed more than 752,700 infrastructure assets across 458,058 flight operations and surveyed over 380,000 km of power infrastructure in 45 countries worldwide.
Overview
As a DevOps Engineer at our company, you will have the opportunity to work with cutting-edge technologies, collaborate with talented teams, and contribute to our mission of delivering the best solutions for our customers. You'll spearhead the automation of processes across software building, testing, deployment and infrastructure management, ensuring seamless collaboration between development and operations teams. You'll drive efficiency, scalability, security, and reliability within our ecosystem.
Key Deliverables
- Lead the design, implementation, and enhancement of CI/CD pipelines utilizing tools like GitHub Actions and GitLab CI/CD to automate software delivery.
- Utilize Terraform to automate infrastructure provisioning and configuration, ensuring consistency and scalability.
- Manage Kubernetes clusters to deploy, scale, and manage containerized applications efficiently.
- Utilize Docker to package applications into containers, enhancing portability and scalability.
- Implement and manage Istio for traffic management, security, and observability within microservices architectures.
- Employ HashiCorp Vault to securely manage sensitive information and access control.
- Utilize Cloudflare for edge computing, CDN, and DDoS protection, enhancing performance and security.
- Implement monitoring solutions with Grafana and Datadog for real-time insights into system performance and health.
- Manage RabbitMQ to enable asynchronous communication between microservices, ensuring scalability and reliability.
- Utilize MongoDB Atlas for managing and scaling MongoDB databases, ensuring high availability and data integrity.
- Implement SonarQube/SonarCloud to analyze code quality and security vulnerabilities, ensuring robust software.
- Employ Apache Airflow for orchestrating complex workflows and data processing pipelines.
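Several of the deliverables above come down to one pattern: decoupling producers from consumers through a queue so services can scale independently. In production that role is played by RabbitMQ over AMQP; the stdlib sketch below (job and asset names are purely illustrative) shows the same decoupling pattern in-process, not the RabbitMQ API itself.

```python
import queue
import threading

# "image_jobs" stands in for a broker queue; in a real deployment this
# would be a RabbitMQ queue consumed via an AMQP client library.
image_jobs = queue.Queue()
results = []

def producer(n):
    """Publish n jobs, then a sentinel so the consumer knows to stop."""
    for i in range(n):
        image_jobs.put({"job_id": i, "asset": f"tower-{i}"})
    image_jobs.put(None)

def consumer():
    """Drain the queue at its own pace, independent of the producer."""
    while True:
        job = image_jobs.get()
        if job is None:
            break
        results.append(f"processed {job['asset']}")
        image_jobs.task_done()

t = threading.Thread(target=consumer)
t.start()
producer(3)
t.join()
print(results)  # three processed assets, in publish order
```

The producer never waits for processing to finish; the queue absorbs bursts, which is exactly the scalability property the messaging bullet is after.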
Competencies (Skills, Knowledge, and Attributes)
- Experience with AWS, Azure, or another major cloud service provider.
- Experience with Kubernetes, load balancing, and Ingress, as well as managed counterparts such as AWS EKS.
- Experience with Python, JavaScript, or a similar programming language is preferred.
- Experience with database sharding.
- Experience with infrastructure as code (Terraform/Helm).
- Experience setting up custom autoscaling logic, configured through a combination of custom code and Helm charts as applicable.
- Experience setting up GitHub Actions workflows or other CI/CD tools such as Jenkins or CircleCI.
- Extensive experience with Docker and Docker Compose.
- Willingness to participate in an on-call rotation, and the patience to communicate with a variety of interdisciplinary teams and users.
- Hands-on experience in Windows, Linux, or UNIX systems administration.
- Familiarity with version control systems such as Git, including branching and merging strategies.
- In-depth knowledge of Linux administration, including shell scripting and network configuration.
- Excellent problem-solving skills with strong attention to detail.
- Familiarity with server virtualization and management.
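The custom-autoscaling competency above is, at its core, a replica-count decision that gets fed to Kubernetes (for example via a Helm value or an external scaler). A minimal sketch of such decision logic, with parameter names and thresholds chosen purely for illustration:

```python
import math

def desired_replicas(queue_depth, jobs_per_replica, min_replicas=2,
                     max_replicas=20, current=None, max_step=4):
    """Pure scaling decision: size the deployment to the backlog,
    clamped to [min_replicas, max_replicas] and, optionally, to a
    maximum step change per evaluation cycle to avoid thrashing."""
    target = math.ceil(queue_depth / jobs_per_replica) if queue_depth else 0
    target = max(min_replicas, min(max_replicas, target))
    if current is not None:
        # Limit how far we move in a single evaluation cycle.
        target = max(current - max_step, min(current + max_step, target))
    return target

print(desired_replicas(0, 50))               # floor applies -> 2
print(desired_replicas(500, 50))             # 500/50 -> 10
print(desired_replicas(5000, 50))            # ceiling applies -> 20
print(desired_replicas(500, 50, current=3))  # step-limited -> 7
```

Keeping the decision a pure function like this makes it trivial to unit-test before it is wired up to cluster metrics and a Helm-templated deployment.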
Formal Qualifications
- Education: Bachelor's degree in Computer Science, Computer Engineering, Information Technology, Electrical Engineering or a related discipline. A Master's degree is a plus.
- Certifications: Cloud and DevOps certifications are a plus.
Do you have what it takes to succeed in a fast-paced and intense environment? Do challenges make you do a happy dance? Are you a wizard of wild and wacky ideas, wanting to turn them into reality? If you're all about smashing the old and embracing the bold, turning oops into woohoo, and aiming for nothing less than mind-blowing, then guess what? You're the superstar we're hunting for!
So, buckle up and hop on this thrill ride – let's create awesomeness together!