On AI, Coding, and Quant Research
Observations from an AI-free break and a return to GitHub and papers
Happy New Year! I know it is nearly the end of January. I had a slow start to the year: three weeks of travel, jet lag, and a long weekend right after made it hard to get back to work at full speed. Still, I really enjoyed my AI-free vacation, where I completely relaxed and spent time with my family. To my surprise, even without AI and without checking the news for three weeks, I kept learning.
I learned introspectively, looking back at what I accomplished in 2025 and how it led me to where I am today. Without external distractions, I was able to think about what is missing, what is not working, and what felt extremely inefficient in 2025. I also reflected on the things I promised myself I would do in 2025 but never did. Should I revisit those goals and take a deeper look? To be honest, I am still processing all of this and slowly shaping my resolutions for 2026.
Just a quick recap of what I learned in the past few weeks. As mentioned earlier, my learning did not necessarily come from new information.
Coding Fluency in an AI-Assisted Workflow
First, I realized that my coding has gotten rusty. I have been relying on AI-assisted coding and acting more as a reviewer of AI-generated code, and slowly I am losing the ability to write code from scratch. You may argue that it is more efficient to have AI generate code nowadays. However, I have started to wonder whether I should allow myself to get rusty in coding at all.
Just like writing, the more you practice, the more fluid and creative your thinking becomes. The same applies to coding. Yes, we write code for work and to build practical systems. But coding is also a way of thinking. It allows ideas to flow and helps uncover new ways to improve existing infrastructure. If I rely solely on AI to generate code, will I lose the ability to brainstorm with myself? I have started to wonder whether I should allocate a fixed percentage of time this year purely to coding.
Where AI Cannot Optimize Yet
Looking back at 2025, beyond professional accomplishments, I spent a lot of time on things that cannot be done digitally, and I still do not know the best way to improve efficiency there. With all the data and information online, we can use AI to search, extract, and consolidate information. Efficiency has improved dramatically in the digital world. We can have AI generate code, use AI assistants to organize our days, and rely on AI shopping tools to find products. These are great accomplishments in the digital space.
However, in 2025, most of the inefficiency I experienced came from things that can only be done offline or are not fully digitized. For example, I spent the last two months of 2025 touring different schools and trying to understand the education system in San Francisco. I mentioned that I created a NotebookLM to summarize school reports. While it was very helpful in narrowing down the list of schools to visit, many times it was still hard to know whether a school was a good fit without seeing it in person.
These are difficult problems to optimize with AI, and I have started to wonder what the best approach is. Education is one example. There are many others, such as finding a good plumber. You may argue that online platforms like Angi or Thumbtack can help. However, for professions like this, the best providers are often not online. They rely on referrals and already have more work than they can handle, so they do not need an online presence. Another example is finding a good body shop for car repairs. It is hard to trust online reviews when the information is limited.
Lastly, for non-digitized data in small businesses, much of it is simply lost. Without data, it is hard to understand how to help them optimize. I have spoken to some AI experts who believe it does not make sense to digitize such unique datasets because there is no clear financial incentive. In that case, how do we help small businesses thrive instead of being overtaken by larger competitors that use AI? While AI has helped digital natives become more efficient, what about those who are not fully online?
Anyway, here are a few AI-related developments I read about this week.
I came across a very interesting repository called AI Investment Agent. In previous posts, I talked about different structures for building multi-agent systems. This repository presents a unique system that mimics the equity research workflow. It runs multiple agents in parallel to perform different types of analysis, such as news, market, and fundamental analysis. It then introduces a debate system that consolidates inputs from opposing perspectives, such as bullish and bearish researchers, before sending a recommendation for decision-making. I have not had a chance to test it yet, but I think it is a great example of a multi-agent use case in the investment space.
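The parallel-analysis-then-debate structure can be sketched in a few lines of plain Python. This is only a schematic of the pattern, not the repository's actual code: the analyst functions, the score values, and the decision thresholds below are all hypothetical stand-ins for LLM-backed agents.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for LLM-backed analyst agents; each returns a
# scored view on the ticker (positive = bullish, negative = bearish).
def news_analyst(ticker):
    return {"source": "news", "score": 0.2}          # mildly positive headlines

def market_analyst(ticker):
    return {"source": "market", "score": -0.1}       # weak price momentum

def fundamental_analyst(ticker):
    return {"source": "fundamentals", "score": 0.5}  # strong earnings

def debate(views):
    # Bullish and bearish "researchers" each assemble their case;
    # the spread between the two cases drives the final call.
    bull_case = sum(v["score"] for v in views if v["score"] > 0)
    bear_case = -sum(v["score"] for v in views if v["score"] < 0)
    net = bull_case - bear_case
    if net > 0.3:
        return "buy"
    if net < -0.3:
        return "sell"
    return "hold"

def research_pipeline(ticker):
    analysts = [news_analyst, market_analyst, fundamental_analyst]
    # Run the analysis agents in parallel, then consolidate via the debate.
    with ThreadPoolExecutor() as pool:
        views = list(pool.map(lambda a: a(ticker), analysts))
    return debate(views)

print(research_pipeline("ACME"))  # → buy (net = 0.7 - 0.1 = 0.6)
```

In a real system each analyst would call an LLM with retrieved documents, but the orchestration shape, fan out in parallel and converge through a structured debate, stays the same.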
LLMs for Quant Investment Research
This paper was particularly interesting to me. It explains how LLM applications have evolved, from NLP-based textual analysis to more advanced use cases in recent years. It proposes several applications, including LLM assistant, LLM quant, and augmented financial intelligence, or AFI.
LLM assistant focuses on processing textual documents and code generation. This is currently the most common use case, as it is relatively low risk and has a strong track record of improving operational efficiency. LLM quant moves into more complex tasks, such as feature generation and signal extraction from textual data. The most intriguing concept for me is AFI, where LLMs are used to create alternative datasets by compiling human expertise, analyst recommendations, and trading behavior. It can also work in reverse by making quantitative trading decisions more explainable.
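To make the "LLM quant" idea of signal extraction from text concrete, here is a minimal sketch. A toy word lexicon stands in for the LLM scoring step; a real pipeline would replace the lookup with a model call, but the downstream shape, text in, bounded numeric feature out, is the same. The word lists and snippets are illustrative only.

```python
# Toy stand-in for an LLM scoring step: map an earnings-call snippet to a
# sentiment feature in [-1, 1] that a quant model could consume.
POSITIVE = {"beat", "growth", "record", "strong"}
NEGATIVE = {"miss", "decline", "weak", "impairment"}

def text_to_signal(snippet):
    words = [w.strip(".,") for w in snippet.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    # Neutral text (no scored words) maps to 0.0.
    return 0.0 if total == 0 else (pos - neg) / total

print(text_to_signal("Record revenue and strong growth this quarter"))  # → 1.0
print(text_to_signal("Earnings miss amid weak demand"))                 # → -1.0
```

The point is the interface: once textual data is reduced to a bounded numeric signal, it can be combined with conventional quantitative features.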
There is still a long way to go for AFI. I remember discussions years ago about quantifying decision-making behavior for traders who rely heavily on intuition. That did not seem feasible at the time, but AFI appears to be moving us a bit closer.
Lastly, I came across the FinanceDatabase repository. While it is not strictly AI-related, it can be very useful for investment professionals. The repository consolidates securities across various asset classes globally to create a centralized database. It offers an easy-to-use Python package that allows users to search and filter securities in different ways. I am excited to see this as an open-source solution for those looking for security masters and security definitions.
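The core idea of a centralized security master is a table of securities you can filter by attribute. The snippet below is a minimal stand-in for that pattern, not the FinanceDatabase package's actual API; the symbols, field names, and `select` helper are all hypothetical, and the real package ships far richer data and filters.

```python
# A toy security master: each row describes one security by attributes.
SECURITY_MASTER = [
    {"symbol": "AAA", "asset_class": "equity", "country": "United States",
     "sector": "Technology"},
    {"symbol": "BBB", "asset_class": "equity", "country": "Germany",
     "sector": "Consumer Cyclical"},
    {"symbol": "CCC", "asset_class": "etf", "country": "United States",
     "sector": "Fixed Income"},
]

def select(securities, **filters):
    # Keep rows matching every provided field, mirroring a select/filter API.
    return [s for s in securities
            if all(s.get(k) == v for k, v in filters.items())]

us_equities = select(SECURITY_MASTER, asset_class="equity",
                     country="United States")
print([s["symbol"] for s in us_equities])  # → ['AAA']
```

A consolidated, filterable table like this is what lets downstream research code ask "all equities in country X, sector Y" without bespoke per-vendor lookups.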
It is still a slow start to the year for me, so I do not have much more to say. Stay tuned, and I should be fully recovered from travel fatigue by my next post in two weeks. Have a great weekend.

