16 years of experience in software development, including 8 years leading teams of up to 30 people. I have built distributed computing, data storage, analytics, and ML/GenAI systems of various scales in finance, marketing, B2B services, and other domains.
Skills
16 years in Software Development
12 years in Big Data Engineering
8 years as a Tech Lead
Assembled multiple large-scale teams in development and marketing
Built several Big Data systems and tools from scratch to production
Deep understanding of LLMs and general experience in ML
Worked with clusters of up to 2000 nodes and 5 PB of data
Deep knowledge of SQL and NoSQL databases
Experience with AWS, GCP, and Databricks cloud platforms
Extensive experience building distributed systems
Strong domain expertise in finance
Understanding of modern design practices and workflows
Knowledge of marketing: performance, PR, and community
Experience
(2022–2024) Own businesses / stealth startups
As a co-founder, I helped create a marketing business at the intersection of performance marketing and finance.
As a founding engineer, I helped build a startup in the finance sector.
As a co-founder, I helped build a startup in the GenAI field.
GPB Investments (Head of Development)
Built the engineering team from scratch, growing it to 30 skilled developers, primarily Java and React engineers.
Curated and actively participated in the development of an in-house framework that ensured system stability, sped up feature implementation, and improved code quality.
Led the overall development process, overseeing and guiding the team’s efforts.
Organized technical meetups and delivered lectures on Big Data to engage the technical staff; mentored and guided engineers.
Reltio.com
Led a team of around 10 Big Data and QA engineers.
Collaboratively developed a Spark-based platform to process, analyze, and store customer data.
Built a serverless database on top of Spark and the Parquet format, with a custom format for incremental data that enhanced analytics capabilities.
Implemented services on AWS and GCP.
Optimized Spark and Cassandra for improved speed and cost efficiency.
VK.com
Worked with a Hadoop cluster of 1500 nodes and HDFS with 5 PB of data.
Developed a Hadoop-based system that processed 200 GB of new data daily from various sources.
Created algorithms that processed several TB of data in a single Hadoop job.