Description:
Data Architect
About the Role:
Our client is seeking a mid-level Data Architect with 3–7 years of experience to join a growing insurtech team. In this role, you will design, optimize, and maintain the data architectures that power key insurance products, analytics, and operations. You’ll collaborate with cross-functional teams to ensure data is reliable, scalable, secure, and aligned with business needs in a fast-evolving insurance technology environment.
Key Responsibilities:
- Design, document, and maintain scalable data models, data pipelines, and storage solutions for insurance products and analytics.
- Architect and optimize cloud-based data platforms (e.g., AWS, Azure, or GCP) for performance, cost efficiency, and security.
- Collaborate with engineering, product, actuarial, underwriting, and data science teams to understand data requirements.
- Implement data governance, data quality, and metadata management practices.
- Define and enforce data architecture standards, patterns, and best practices.
- Support integration of third-party data sources, including telematics, claims data, customer data, and policy data.
- Perform root cause analysis and troubleshoot complex data issues.
- Partner with security teams to ensure compliance with industry regulations (e.g., HIPAA, SOC 2, state insurance requirements).
- Evaluate new technologies, tools, and frameworks that can improve data capabilities.
- Produce clear technical documentation, diagrams, and guidelines.
Required Qualifications:
- Bachelor’s degree in Computer Science, Data Science, Information Systems, Engineering, or a related field.
- 3–7 years of experience in data architecture, data engineering, or a similar role.
- Strong experience with cloud data platforms (AWS, Azure, or GCP).
- Proficiency in data modeling (conceptual, logical, physical) and database design for relational and NoSQL systems.
- Hands-on experience with ETL/ELT pipelines, data orchestration tools (e.g., Airflow, dbt), and data warehousing.
- Strong SQL skills and familiarity with distributed computing frameworks (e.g., Spark).
- Experience designing data solutions for analytics, reporting, and machine learning workflows.
- Understanding of data security, privacy, and governance best practices.
- Excellent communication and documentation skills.