Touch-first UIs can often look Playskool-like when a pointer is introduced. So how do we adjust content based on input type? Do new applications need multiple levels of information resolution?
Ex. Docking a tablet to a larger external monitor with keyboard and mouse; the information and layout stay fixed to the assumption that they are on an 11″ screen.
I was at a conference recently where a booth had a Windows 8 Pro 11″ tablet docked and driving a larger screen for heavier workloads. The downside was that most Modern apps maintained the 11″ scale. The Modern weather app, when docked, was simply a 23″ magnification of the 11″ tablet UI, completely disregarding the added real estate and the additional input options (mouse/keyboard). The immediate feeling was that there is a real need to be able to adjust for input.
Information/content resolution and the corresponding UI target sizes should expand and contract based on the input tools that are present, not just on stylesheets tied to the screen size. The logical expectation is that when I add a mouse/pointer, I want to address the UI at a sharper resolution, freeing more space for content:
Today content is expected to respond to the glass size, but it should truly meet at the intersection of input method and glass size.
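A minimal sketch of that intersection, assuming a browser environment: the CSS Media Queries Level 4 `pointer` feature reports whether a fine pointer (mouse/trackpad) is present, and an app can combine that with the viewport size to pick a content density. The `densityFor` helper, the density names, and the width threshold below are hypothetical, illustrative choices, not part of any real framework.

```typescript
// Hypothetical content-density picker: density names and the 1280px
// threshold are illustrative assumptions, not from any platform API.
type Density = "touch-comfortable" | "pointer-compact";

function densityFor(hasFinePointer: boolean, screenWidthPx: number): Density {
  // A fine pointer can hit smaller targets, so the UI can shrink its
  // controls and show more content on the same glass.
  if (hasFinePointer && screenWidthPx >= 1280) {
    return "pointer-compact";
  }
  return "touch-comfortable";
}

// In a browser, input capability can be read at runtime and re-read when
// docking/undocking changes it:
//   const fine = window.matchMedia("(pointer: fine)").matches;
//   document.body.dataset.density = densityFor(fine, window.innerWidth);

console.log(densityFor(true, 1920));  // docked tablet with mouse on a big monitor
console.log(densityFor(false, 1366)); // standalone touch tablet
```

The point of keying on both inputs is exactly the docked-tablet case above: same app, same code path, but the layout re-resolves when a mouse appears instead of merely magnifying the touch layout.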