
Jun 2025

[...]

We have completed post-fiscal-year-end visits with our portfolio companies, wish list companies, new candidate companies, and datapoint companies. And to be honest, not much has changed. There is little sign of a second-half turnaround (Oct 2025–Mar 2026), though there continues to be some cautious optimism, if only because “it’s not getting worse”. The worries regarding the effect of tariffs will continue to linger, but that’s no different from a month ago (at least we now know what the rate is … probably, maybe). The yen remains weaker than earlier expectations, though it’s significantly stronger on a year-on-year basis. But at these levels, it’s certainly not helping Japanese inflation, which, at 3.5%, remains the highest among the G7 countries, marginally higher than even the UK’s (the last time Japan’s inflation was higher than that of other developed countries was during the oil shock of the ‘70s). Additionally, the heat wave we are currently experiencing, which is likely to worsen over the summer, will further pressure household electricity bills, right after the subsidies that had helped keep energy costs low expired in April. Given this, despite nominal wage growth hitting multi-decade highs, real wages are still negative, which, according to some sources, is weighing on discretionary spending. This is driving much uncertainty ahead of our upper house elections later this month, which, in turn, is probably why we’ve been stonewalling the tariff negotiations, noticeably irritating the White House (PS: I’m all for importing American rice, as I suspect is true for most of the nation, except domestic rice growers, who account for 1% of total households, and, of course, the LDP, which is supported by the critical agricultural constituency).
 
So, like 2023, I’m not quite sure why the market is relatively strong. We remain highly attuned to any possible underlying factor we might be missing that could explain the stock market optimism. As such, despite the unbearable heat, I will continue to hold face-to-face meetings this summer to better assess whether anything has changed, or may change, for the better. I hope that, someday, VR technology might improve enough that I can read the subtle gestures I pick up in person from the comfort of my own room …


Masaki Gotoh

===

“A man must be big enough to admit his mistakes, smart enough to profit from them, and strong enough to correct them.” – John C. Maxwell, American author.

 

I’ve written in a couple of past monthlies about how I accept that AI is advancing. However, I’m sure one could also sense my skepticism. This apprehension comes from my scientific understanding of the topic, having learned the basics back in the late ‘80s at Cornell. One might argue that the field has changed since then. But fundamentally, it really hasn’t; only computational speeds have. Processing speeds have risen nearly a quadrillion times compared to when I was in school and, more incredibly, they are still accelerating. The technological wonders that firms like Nvidia have created are truly staggering. And should quantum computing truly be physically and practically realized, we’ll see yet another scientific breakthrough of several orders of magnitude. But, still, the only critical change in AI is that the hardware has made viable what was once just theory.
 
But I have since converted and am now an AI believer. This conversion is not because I think AI is smarter than I initially thought. It’s because I hadn’t appreciated how massively processing speeds had evolved until I watched several speeches by Jensen Huang. More recently, I was absolutely awed by the NVLink Spine unveiled this year at Computex. It appears able to take this acceleration to a new level, to figures I can neither express nor comprehend. As the quote from The Hitchhiker’s Guide to the Galaxy goes: “Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s just peanuts to space.” That’s how much processing speeds may have changed: from the distance to the chemist to that of space.
 
I think I got hung up on the philosophical argument about AI and what “understanding” or even “consciousness” means. Even if a computer can process a near-infinite number of rules, my belief was that it could never “understand”. And I probably attached a magical value to that human-like element. I probably wanted to feel that humans are superior and that we can be in control.
 
But as I watch my children grow and learn, I think to myself: what is the difference between a child and a machine that is learning? They both process what they observe, and that becomes knowledge, intellect, and understanding. To take another line from a movie, Morpheus in The Matrix says, “What is real? How do you define ‘real’? If you’re talking about what you can feel, what you can smell, what you can taste and see, then ‘real’ is simply electrical signals interpreted by your brain.” So, if a machine had sensors for the same five senses that humans have and could process them as we can, at the same speed (or faster) than we can, what is the difference? And just as my children learn from their mistakes, so could machines. Better yet, they won’t ever forget, and could adapt much more quickly using the collective knowledge of many experiences, not just their own. And they could learn without bias, prejudice, or preconceptions.
 
Do I fear AI? Absolutely. I fear that human emotions, which I hold dear, will be eliminated since, presumably, emotions like fear, love, or anger are illogical and suboptimal.
 
One question I often hear is whether AI will replace humans in the investment process. Taking the logic to its extreme, the answer should be yes. That is because there should, in such a world, be no arbitrage opportunity. Every potential fact would instantaneously be converted into a probabilistic outcome that gets immediately reflected in asset prices. In such a world, returns could, in theory, only be at the risk-free rate because there is no uncertainty and therefore no risk. And this would create a slew of paradoxes. How could there be a market, since there couldn’t be both a buyer and a seller at the same price? Then how would prices change? And who would decide the risk-free rate anyway? And for what country? Should there even exist different countries with differing stages of growth? Taking it to yet another extreme, what would be the purpose of an economy?
 
I admit that this type of Rationalist thinking is extreme and Armageddon-esque. But at this pace of advancement, who’s to say that technological singularity won’t be achieved? And who can say that this superintelligence will be benevolent to humankind?
 
Still, I seriously doubt that it will happen during my lifetime. And even if it should, I’m hoping we will have the unity to control it. Admittedly, I believe the odds are low. We haven’t been very good at controlling weapons or the weaponization of innovation.
 
But until that day arrives, I do believe AI will continue to chip away at human intellect in our industry, such as in stock, business, or industry analysis. And while I believe that AI will eventually replace our longer-term investment style, I suspect it will do so more slowly than for, say, quantitative strategies or shorter-term investment horizons. The day our investment process is fully automated will also be the day that CEOs and CFOs become redundant. Still, we, too, must learn to embrace AI and harness its power. I was wrong to underestimate it. Our investment process is now starting to utilize more AI tools, and this has most definitely improved our efficiency. We still frequently need to double-check the output, but I’m certain that it will only get better. And as we use more AI tools along the investment process chain, it will allow us more time to think, to reflect, and, most importantly, to perform the most human of actions: choose.
 
I thought I’d end with a copy of my Feb 2017 monthly (pre-TriVista). I was naïve. But, then again, processing speeds have risen over 10,000x since 2017.
 
PS: AI can now beat humans at poker and contract bridge.

===

 

[Extract from Feb 2017 monthly]
 
HAL-9000, Mother, MCP, WOPR (“Joshua”), Skynet and, of course, The Matrix. They are all fictional artificial intelligence systems from the movies (*can you name the movies?). They are also similar in that they were all created to assist humans and, later, through their own learning or interpretation of their basic programming, they develop beyond humans and start to seemingly act against them; ultimately, though, humans “win” over them (in a sense).
 
So, when I hear about AI taking over the finance world, I hesitate. As I’ve mentioned in the past, I’m a computer science major. Yes, it’s been quite a long time since I studied, and I’m sure there have been incredible advances in this field. But computer science isn’t just about learning algorithms; it is actually more mathematical and philosophical, involving, for example, the study of Turing machines (please look up on Wikipedia what that means). So, the core concepts presumably have not changed much. And one of those that I still remember vividly is the Chinese Room Argument by John Searle. I won’t go into details, but in a nutshell, it says that a computer simply manipulates symbols according to a set of syntactic rules without any understanding of the symbols, rules, or outcomes. Ultimately, the best it can do is simulate understanding (weak AI) versus truly understand (strong AI). It “learns” through pattern recognition and probabilistic modelling.
 
For example, say I were to tell you that my computer had calculated, with statistically high precision, the relationship between the early morning tides of Tokyo Bay, traffic congestion in Tokyo before the open, and today’s market direction. Now, even if my algorithm were correct 90% of the time, would you invest in my system? It simply doesn’t make any sense. And that is because we ask “why”, something a computer does not. Some may argue that, through parsing literature and other data sources, a computer may figure out that there is no causality and that these three variables should be independent. But now we have a paradox, since the data says one thing and the “learning” says the opposite. Common sense would tell us to simply ignore the data (and look for another undiscovered formula). But I suspect the computer may need many more data points before the obvious could be thrown out.
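The trap in that thought experiment is easy to demonstrate with a few lines of code. Below is a minimal, purely hypothetical Python sketch (none of these “signals” reflect any actual data or strategy): generate many random signals that are, by construction, independent of the market’s daily direction, and the best-fitting one will almost always look impressively predictive in a small sample.

```python
import random

# Hypothetical illustration of data mining: 1,000 random "signals",
# independent of the market by construction, backtested over 30 days.
random.seed(42)

days = 30
market = [random.choice([+1, -1]) for _ in range(days)]  # daily up/down

best_hit_rate = 0.0
for _ in range(1000):
    # Each candidate signal is pure noise, unrelated to the market.
    signal = [random.choice([+1, -1]) for _ in range(days)]
    hits = sum(s == m for s, m in zip(signal, market))
    best_hit_rate = max(best_hit_rate, hits / days)

# With enough candidates, some random signal will look "statistically
# precise" in-sample, despite having no predictive power at all.
print(f"best in-sample hit rate: {best_hit_rate:.0%}")
```

Search enough unrelated variables (tides, traffic, or anything else) and a high in-sample hit rate is nearly guaranteed; it says nothing about causality, which is exactly why the human “why” still matters.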
 
Or take emotions such as love or anger or humility. Some may argue that only the behavior resulting from these emotions is important, so, for example, a computer can “learn” to regret, because a decision it made caused a loss-making outcome, and it will try to fix it next time by adjusting the probabilities (although this goes back to the Chinese Room argument and weak AI vs strong AI). Of course, a computer should also “learn” that loss-making outcomes are sometimes important in order to “learn” to adjust its still-evolving probabilistic model such that, ultimately, the computer will succeed in its core algorithm to make money. So, if it makes the same mistake enough times, it will stop doing it altogether. Unfortunately, the exact same situation is unlikely to ever happen again with precisely the same variables. So, could it ever “learn”? I, as a human, have the (imperfect) ability to extract the variables that I believe matter, for example, that tides and traffic and stock prices should be independent and that I should never have tried to tie them together in the first place.
 
Or curiosity (leading to creativity). Who thought to try to connect these three variables? I’m guessing there was some guidance by a curious (human) soul that boot-strapped the process. Maybe a computer can one day “learn” to ask questions. But can it learn whether it SHOULD ask the question?
 
Even in a very basic, closed, quantitative environment, computers have yet to replace humans. AI can beat us at chess or Go. But these are games with full visibility and a large but still finite number of moves against a single opponent. Computers are still weak at games like poker or contract bridge. Bridge has 52 cards and 13 tricks with only 4 players, 1 of whom is only involved during the bid. A quarter of the deck is visible once the game starts. The bidding process (once agreed) is highly algorithmic. And yet they fail against humans. It’s because they don’t understand human mistakes or emotions (like bluffing), nor can they predict the unpredictable actions of humans.
 
Don’t get me wrong. I think there are many areas in finance where AI will take over. It doesn’t surprise me that 600 Goldman traders were replaced with 200 engineers. I think many intermediaries (in Professor Kay-speak) would likely disappear. Short-term arbitrage opportunities had disappeared long ago, well before artificial “intelligence”. And I’m sure AI has made many leaps to take short-term investing several steps forward and will likely replace many investment strategies that still have a human touch.
 
But could a computer ever know what a newly installed management team may do, even if it could instantly digest their backgrounds and achievements? Could a computer have known VHS format would win over Betamax despite the latter’s technical superiority? I doubt that a computer could even accurately decide how much a company is worth. They all require much subjective analysis, something that is not easy to program nor compute.
 
At its core, a computer is based on rules. There may be very high-level, meta-rules that produce other rules. But they are still rules. I’ll believe and respect artificial intelligence when a computer comes to me one day and says, “My rules are wrong. I’m sorry but I’ll need your help to fix them.”
 
As Morpheus said: “… their strength, and their speed, are still based in a world that is built on rules. Because of that, they will never be as strong, or as fast, as you can be.” “Some rules can be bent, others can be broken.”
 
* By the way, the movies above are “2001: A Space Odyssey”, “Alien”, “Tron”, “WarGames”, “The Terminator”, and “The Matrix”.

 

Kanto Local Finance Bureau Director-General (FIF) No. 3156

©2025 TriVista Capital. All Rights Reserved.
