Why is it that no AI anywhere, in any app, no matter the model and no matter how supposedly "smart" it is, has the faintest idea about its own interface?
How hard would it be to have it "swallow" a short document at the start of a conversation, giving it the current coordinates of every button in the very app it's supposed to be an agent in?
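A minimal sketch of what that "swallowed" document could look like: the app exports a map of its clickable elements and renders it into a preamble prepended to the conversation. All names, coordinates, and the format here are hypothetical assumptions, not any real app's API.

```python
# Hypothetical UI map: every clickable element with its current
# screen coordinates, exported by the app at conversation start.
ui_map = {
    "export_button": {"label": "Export", "x": 912, "y": 48, "w": 80, "h": 32},
    "settings_gear": {"label": "Settings", "x": 1004, "y": 48, "w": 32, "h": 32},
}

def build_system_preamble(ui_map: dict) -> str:
    """Render the UI map as a short document to prepend to the
    conversation, so the model can answer 'where is X?' questions."""
    lines = ["Current UI layout (element: label @ x,y, size WxH):"]
    for name, el in ui_map.items():
        lines.append(
            f"- {name}: '{el['label']}' @ {el['x']},{el['y']}, "
            f"size {el['w']}x{el['h']}"
        )
    return "\n".join(lines)

preamble = build_system_preamble(ui_map)
print(preamble)
```

The point is that this costs a few hundred tokens per conversation, and it has to be regenerated whenever the layout changes (resize, theme, version update), so it is a runtime export, not a static doc.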
And then, when I ask my agent "where's this or that?", it wouldn't send me off to go read a manual!
And imagine this (this will make you rich if you're smart enough to listen!): give it the ability to actually POINT IT OUT, show it VISUALLY, highlight the button, or at the very least show a screenshot with the button marked.
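Once the agent has coordinates, the "point it out" part is cheap: draw a box over a screenshot. A sketch using an SVG overlay (stdlib only; the screenshot path, canvas size, and element rectangle are placeholder assumptions):

```python
def highlight_svg(x: int, y: int, w: int, h: int,
                  screenshot_uri: str = "screenshot.png",
                  canvas_w: int = 1280, canvas_h: int = 720,
                  pad: int = 6) -> str:
    """Return an SVG document that layers a red highlight box around
    the element's rectangle on top of a screenshot image."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{canvas_w}" height="{canvas_h}">'
        f'<image href="{screenshot_uri}" '
        f'width="{canvas_w}" height="{canvas_h}"/>'
        f'<rect x="{x - pad}" y="{y - pad}" '
        f'width="{w + 2 * pad}" height="{h + 2 * pad}" '
        f'fill="none" stroke="red" stroke-width="4"/>'
        f'</svg>'
    )

# Highlight the hypothetical Export button at 912,48 (80x32).
overlay = highlight_svg(912, 48, 80, 32)
print(overlay)
```

The agent answers "it's top right, next to the gear" in text *and* ships this overlay for the app to render, so the user sees the box, not prose about it.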
Instead, the thing writes novels, poetry, about buttons that don't exist, like every other clueless AI.
Come on. I'll make the thing myself, but then you'll have to pay for it.