If you somehow got it trained on 30,000 calculator GUIs, it might still produce a working calculator GUI, but that would be far more effort than just building one normally. And it could never generalize to other programs, especially not to new ones, where no training base exists.
There is a difference between composing a described image and engineering an interface. The image has to satisfy its constraints (e.g. "a spheroid blue giraffe"); the interface has to be useful to the user.
There are endless ways to draw something that is spheroid, blue-ish and looks somewhat like a giraffe, and all of them are correct.
Similarly, there are endless ways to draw "three buttons and an input field"... but not all of them are useful. If I put the input field in the upper left corner, let two buttons overlap in the middle, and put the third one in the lower right, I have satisfied the constraints, but who wants to use the result?
Sure, I could make the description more precise, that is, make the constraints so narrow that in order to fulfill them, the AI has no choice but to generate exactly what I want. But at that point, I have already done the work myself.
The idea is that a domain expert drafts (specs out) the I/O fields and their order. Then the tool suggests UIs.
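To make that concrete, here is a minimal sketch of what such a spec might look like. The `FieldSpec` shape and the example fields are hypothetical, invented for illustration; they are not the tool's actual format.

```typescript
// Hypothetical spec format: the domain expert declares only the I/O
// fields and their order; all layout decisions are left to the tool.
type FieldKind = "input" | "output" | "action";

interface FieldSpec {
  name: string;                              // label shown to the user
  kind: FieldKind;                           // role the field plays
  dataType: "text" | "number" | "boolean";   // value the field carries
}

// Example: a currency converter, specced by a domain expert.
const converterSpec: FieldSpec[] = [
  { name: "Amount",          kind: "input",  dataType: "number" },
  { name: "Target currency", kind: "input",  dataType: "text" },
  { name: "Convert",         kind: "action", dataType: "boolean" },
  { name: "Result",          kind: "output", dataType: "number" },
];
```

The point of keeping the spec this small is that it only pins down what the expert actually knows (which fields exist and in what order), while leaving the "useful arrangement" problem to the generator.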
Right now the tool generates one working prototype, but soon it will generate variations.
Check it out, it’s 100% free; advice and feedback are appreciated.