More seriously, a lot of testing is likely already taking place and slowly making its way from the tech front lines to the tech rear, so to speak. I would guess that right now, in military forces around the world, there are 10K+ officer-level IT, engineering, and mathematics specialists pushing with all their might to get institutional buy-in for their take on what AI can do.
IMO, creative logistics will not be as amenable to positive intervention by AI. I expect this to reinforce the trend of supply-line attacks becoming more popular, with lasting consequences. This will hold more strongly the tighter the attacker's feedback loop and the more rigid the defender's supply line.
I would guess that defensive strategies for logistics will deepen significantly on 25+ year timelines, likely tilting away from fixed, established supply lines and toward flexibility as soon as possible.
Battlefield strategy will need to keep up as well. Flexibility in materials and quick fabrication are going to be absolutely huge, especially with smart materials if possible.
At the deployed battalion-to-squad level, there will need to be translation, establishment, and hardening of sensory-characteristics-to-AI pathways in systems. This should be both broad and deep, and will likely be part of the new universal-soldier concept that follows the last era's universal-soldier concept.
I would hope to see, at a minimum, the establishment of an objective rating system for a battlefield situation's AI-amenability, to inform combat decision-making.
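Purely as an illustration of what such a rating might look like, here is a minimal sketch. Every factor name, weight, and threshold below is invented for the example; a real system would need validated factors and far more rigor:

```python
from dataclasses import dataclass

# Hypothetical factors a staff officer might score from 0.0 (poor) to 1.0 (good).
# All names and weights are invented for illustration only.
@dataclass
class SituationFactors:
    sensor_coverage: float        # how much of the situation is machine-readable
    data_link_reliability: float  # can data actually reach the AI systems
    tempo_match: float            # does the decision tempo suit AI latency
    adversarial_noise: float      # 1.0 = low jamming/deception pressure

WEIGHTS = {
    "sensor_coverage": 0.35,
    "data_link_reliability": 0.25,
    "tempo_match": 0.20,
    "adversarial_noise": 0.20,
}

def ai_amenability(f: SituationFactors) -> float:
    """Weighted score in [0, 1]; higher means more AI-amenable."""
    return sum(getattr(f, name) * w for name, w in WEIGHTS.items())

# Example: good sensors, shaky comms, mismatched tempo, heavy jamming.
score = ai_amenability(SituationFactors(0.9, 0.5, 0.4, 0.2))  # 0.56
```

The point isn't the specific math but that commanders would get a single, comparable number indicating how much weight to put on AI-derived recommendations in a given situation.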
Along with a general economic institutionalization, I expect open-source intel to constrict sharply in both reliability and availability. People will realize that if you give an inch to an AI-enhanced adversary, it will try to take 1,000 miles.
Specialists will also adopt de facto hypnosis/jailbreaking mindsets as a default part of the toolset for direct AI interaction.
I expect AI to also be used for counterintel to further restrict open-source intel, though probably to a lesser degree.
Just some thoughts though.
At the same time, I would hope the UN could come together and draft a world treaty stating that none of these weapons can be operated without human supervision. Humanity needs an "AI non-proliferation" treaty for all militaries to agree that AI will never be allowed to "pull the trigger" on human life: remotely operated systems must always keep a human being on the other end.