my main goal for 2026 is to continue to branch out into new edges, and question my accepted ideology that I can only compete in the risk premia arena and spend months on end on drawdowns.
that might be true, but I gotta see it for myself.
I also think I've always framed the question in the wrong way and it's more of a spectrum than a binary answer.
how much more Sharpe can I generate, on top of what I currently already do?
I've seen a glimpse of it this year already and I am spending all of my time designing my entire infrastructure to make it feasible.
even if I don't make it work, I now have a more robust infra, unrecognizable compared to what it was before.
I've ensured that my execution engine can support this and scale by:
- taking any type of model
- standard data sources + alternative data
- cross-exchange execution
- scalable record keeping
- risk alerts
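to make the list above concrete, here is a minimal sketch of what a model-agnostic execution layer can look like. every name in it is hypothetical (this is not my actual engine): any model just emits target orders, and the engine handles routing, record keeping, and risk alerts in one place.

```python
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Order:
    exchange: str   # cross-exchange: routing key for the right adapter
    symbol: str
    qty: float      # signed: positive = buy, negative = sell


class Model(Protocol):
    """Any type of model -- risk premia, alt-data, whatever -- plugs in
    as long as it can turn input data into a list of target orders."""
    def target_orders(self, data: dict) -> list[Order]: ...


@dataclass
class Engine:
    max_abs_qty: float = 100.0                          # toy risk limit
    fills: list[Order] = field(default_factory=list)    # record keeping
    alerts: list[str] = field(default_factory=list)     # risk alerts

    def run(self, model: Model, data: dict) -> None:
        for order in model.target_orders(data):
            if abs(order.qty) > self.max_abs_qty:
                self.alerts.append(f"qty limit breached: {order}")
                continue
            # in a real engine this would dispatch to an exchange adapter
            self.fills.append(order)


class SignMomentum:
    """Toy model: go long where the signal is positive, short otherwise."""
    def target_orders(self, data: dict) -> list[Order]:
        return [Order(ex, sym, 10.0 if sig > 0 else -10.0)
                for (ex, sym), sig in data.items()]


engine = Engine()
engine.run(SignMomentum(), {("binance", "BTC"): 1.2, ("okx", "ETH"): -0.4})
```

the point of the `Protocol` is that new models never touch the engine: anything with a `target_orders` method scales onto the same routing, record keeping, and alerting for free.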
now I've been spending a lot of time making sure that my "alpha" generation engine is super efficient, so I can take an idea from ideation through simulation (if applicable) into prod with a click of a button.
also making the effect modeling a one-click operation, so I don't have to restructure things each time I want to test a new idea (@systematicls @quant_arb ).
making alpha generation almost factory-like, so I can keep competing without exhausting myself on adjacent development tasks that are non-PnL-generating by nature.
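the "factory-like" pipeline above can be sketched as a tiny stage registry. all names here are hypothetical, a sketch of the idea rather than my real tooling: each idea declares its stages, simulation is optional, and one call walks everything to prod.

```python
from typing import Callable

# registry mapping idea name -> its pipeline stages
REGISTRY: dict[str, dict[str, Callable[[], str]]] = {}


def idea(name: str, **stages: Callable[[], str]) -> None:
    """Register an idea with its pipeline stages (simulate is optional)."""
    REGISTRY[name] = stages


def ship(name: str) -> list[str]:
    """The 'one click': run each registered stage for an idea, in order."""
    results = []
    for stage in ("ideate", "simulate", "deploy"):
        fn = REGISTRY[name].get(stage)
        if fn is not None:          # e.g. skip simulation if not applicable
            results.append(f"{stage}: {fn()}")
    return results


idea("carry_v2",
     ideate=lambda: "spec written",
     simulate=lambda: "looks viable in backtest",
     deploy=lambda: "live on prod engine")

log = ship("carry_v2")
```

the design choice is that the pipeline shape lives in one place, so testing a new idea means registering stages, not restructuring the surrounding code.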
a lot to get done and excited for this new chapter.