About Us
- VICI Holdings is a top-tier company focused on cutting-edge technology and financial trading, driven by an engineering- and data-first culture that builds market-leading trading systems and methodologies.
- Our hardware and software teams jointly develop ultra-low-latency, high-performance digital and data infrastructure on par with Wall Street's best.
- Our strategies span stocks, futures, and derivatives, scaling across markets and asset classes with robust risk controls and automation.
- With daily global trading volume surpassing tens of billions of NTD, we demonstrate our leadership in technology, execution excellence, and operational scale.
- We prioritize long-term thinking, talent growth, and open knowledge-sharing, offering a fast-learning environment with real ownership and impact.
About the Position and Team
- This role is a key managerial position within the Data Team, focused on end-to-end data operations, including data collection, data mining, and data cleaning, to ensure real-time reliability and usability for trading and research.
- You will drive cross-functional alignment (infrastructure, strategy research, platform engineering), translating business needs into measurable SLOs/SLIs for data services.
- You will build and strengthen observability, data governance, and quality metrics with incident retrospectives and continuous improvement mechanisms.
- With automation at the core, you will elevate deployment, monitoring, alerting, and recovery capabilities of data pipelines and platforms.
- You will drive standardization and documentation, covering the data dictionary, lineage, and compliance workflows, to ensure consistency and maintainability as the team scales.
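As a rough illustration of the SLO/SLI framing mentioned above, the sketch below checks one hypothetical SLI (batch freshness) against a target; the metric name, numbers, and threshold are illustrative assumptions, not figures from this posting:

```python
from dataclasses import dataclass


@dataclass
class SLO:
    """A service-level objective: an SLI name and its target threshold."""
    name: str
    target: float  # e.g. 0.995 means 99.5% of batches must arrive on time


def freshness_sli(delivered_on_time: int, total_batches: int) -> float:
    """SLI: fraction of data batches delivered within their deadline."""
    return delivered_on_time / total_batches if total_batches else 1.0


# Hypothetical example: 9,985 of 10,000 daily batches arrived on time.
slo = SLO(name="batch_freshness", target=0.995)
sli = freshness_sli(9_985, 10_000)
meets_slo = sli >= slo.target
```

Framing data-service expectations this way turns a vague promise ("data is fresh") into a number the team can alert on and report against.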
Responsibilities
- Data Operations: Own day-to-day operations of data platforms/pipelines, covering capacity and cost management, upgrades, deployments, and recovery drills, to sustain high availability and low latency.
- Data Collection: Design and manage multi-source ingestion (exchanges, vendors, internal systems), protocol parsing, and robust retry mechanisms.
- Data Mining: Partner with research/strategy teams to extract high-value features from raw and semi-/unstructured data, building reproducible exploration and evaluation workflows.
- Data Cleaning & Quality: Implement rule-based and statistical/ML checks (completeness, uniqueness, time alignment, missing/outlier handling) with automated remediation and backfilling.
- End-to-End Pipelines: Plan and maintain scalable ETL/ELT, including scheduling, caching, partitioning, schema evolution, and lineage, to support both backfills and real-time streaming.
- Reliability Engineering: Build observability (metrics/traces/logs), SLOs/SLIs, anomaly detection, and high-priority incident response (on-call) to reduce MTTR and increase the change success rate.
- Security & Compliance: Enforce access control, auditing, encryption, and data classification to meet internal and external audit and security standards.
- Automation & Tooling: Use IaC, data versioning, data tests, and CI/CD to improve predictability and reduce manual risk.
- Documentation & Knowledge: Maintain searchable runbooks, knowledge bases, and data dictionaries to speed onboarding and cross-team efficiency.
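The rule-based cleaning checks described in the responsibilities (completeness, uniqueness, outlier handling) could be sketched roughly as follows; the column names, sample data, and z-score threshold are illustrative assumptions only:

```python
import pandas as pd


def quality_report(df: pd.DataFrame, key: str, value: str, z_max: float = 3.0) -> dict:
    """Run simple rule-based checks and return a per-rule pass/fail summary."""
    report = {}
    # Completeness: no missing values in the key or value columns.
    report["complete"] = not df[[key, value]].isna().any().any()
    # Uniqueness: the key column must not contain duplicates.
    report["unique_keys"] = not df[key].duplicated().any()
    # Outliers: flag values more than z_max standard deviations from the mean.
    z = (df[value] - df[value].mean()).abs() / df[value].std(ddof=0)
    report["no_outliers"] = bool((z <= z_max).all())
    return report


# Hypothetical tick data: a sequence number and a price column.
ticks = pd.DataFrame({"seq": [1, 2, 3, 4], "price": [100.0, 100.1, 99.9, 100.2]})
report = quality_report(ticks, key="seq", value="price")
```

In practice such checks would run automatically after each pipeline stage, with failing rules triggering remediation or backfill jobs rather than a manual review.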
Requirements
- 5+ years in data engineering, data platforms, or backend infrastructure, with 2+ years of leadership or mentoring experience (team lead/tech lead).
- Proficiency in at least one data stack, such as Python (Pandas/PySpark), Java/Scala (Spark/Flink), or Go, balancing performance with maintainability.
- Hands-on with batch and streaming: scheduling (Airflow/Prefect), messaging (Kafka/Redpanda), and lakehouses/warehouses (S3/Delta/Iceberg/Snowflake/BigQuery).
- Deep understanding of data quality and governance: schema evolution, lineage, catalogs/permissions, and privacy/compliance; able to design observable and auditable quality systems.
- SRE mindset and practice: monitoring/alerting, capacity planning, game days, and incident management with RCA, consistently reducing blast radius and MTTR.
- Comfortable with Linux and cloud/hybrid environments (AWS/GCP/on-prem) using containers and IaC (Docker/K8s/Terraform).
- Strong communication skills to translate between engineering and trading/research needs.
Nice to Have
- Experience with market data (exchange feeds, L1/L2, derivatives) or vendor integrations.
- Familiarity with time-series/event data models, compression, and efficient storage (e.g., columnar stores or TSDBs).
- Hands-on with data testing/observability tools (Great Expectations, dbt tests, OpenTelemetry, Monte Carlo, etc.).
- Understanding of low-latency system needs (time alignment, consistent backfills, edge cases) with engineered solutions.
- Working proficiency in English is a plus.
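The time-alignment and consistent-backfill concerns above can be illustrated with a minimal gap detector over an event stream; the one-second expected interval and the sample timestamps are assumptions for the example, not a production specification:

```python
from datetime import datetime, timedelta


def find_gaps(
    timestamps: list[datetime], expected: timedelta
) -> list[tuple[datetime, datetime]]:
    """Return (start, end) pairs where consecutive events are farther apart
    than the expected interval, i.e. candidate windows for backfilling."""
    gaps = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > expected:
            gaps.append((prev, cur))
    return gaps


# Hypothetical feed: ticks expected every second, with one 4-second outage.
base = datetime(2024, 1, 1, 9, 30)
events = [base, base + timedelta(seconds=1), base + timedelta(seconds=5)]
gaps = find_gaps(events, expected=timedelta(seconds=1))
```

A real market-data pipeline would layer exchange sequence numbers and vendor-specific session rules on top of a check like this, but the core idea, detecting windows to backfill deterministically, is the same.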