
Multi-personal multi-input computer administration

I was just watching a Technology Review interview of Jeff Han, the “father” of the current multi-touch fad in computing. (I don’t mean that in a bad way; it’s just that, within two years of the guy’s video of multi-touch interaction, other companies, including Apple and MS, launched their own bids for multi-touch interactive devices with commercial blitzkriegs and grandiose announcements.)

Anyway, watching it soon gave me an idea about the multi-touch devices that will become much more sophisticated in the future.

If the laptops, mobile devices, and desktops of the present are personal computers, then do the Microsoft Surface and other large multi-touch displays that can handle interaction from more than one person at the same time count as “multi-personal computers”?

The larger multi-touch devices that can handle input and output from more than one or two individuals concurrently should, I think, allow more than one instance of an installed application to be open at a time, so that the users of a large multi-touch device can use it to the fullest extent without getting in each other’s way.

I think that administering a large multi-touch device goes beyond setting up a username, a password, and personal settings for each user. It also extends to the graphical compartmentalization of the display, with space allocated to each concurrent user.

I mean, if servers (which are, technically, multi-user) have to allocate logins and privileges for each user over the network, then why not multi-personal multi-touch computers?
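To make that concrete, here is a minimal sketch, in Python and with entirely hypothetical names, of how a shared multi-touch “table” might track its concurrent users: each login gets its own slice of the display, its own privileges, and its own application instances, much like accounts on a multi-user server.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """Rectangular slice of the shared display allocated to one user."""
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

@dataclass
class UserSession:
    """One concurrent user: credentials, privileges, and their own app instances."""
    username: str
    privileges: set = field(default_factory=set)
    region: Region = None
    app_instances: list = field(default_factory=list)  # e.g. ["photos#1", "browser#1"]

class MultiTouchTable:
    """Hypothetical session manager for a large shared multi-touch display."""

    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.sessions = []  # list of UserSession

    def login(self, username: str, privileges: set):
        """Add a user and re-split the display evenly among everyone logged in."""
        session = UserSession(username, privileges)
        self.sessions.append(session)
        slice_w = self.width // len(self.sessions)
        for i, s in enumerate(self.sessions):
            s.region = Region(i * slice_w, 0, slice_w, self.height)
        return session

    def launch(self, session: UserSession, app: str) -> str:
        """Open a *separate* instance of an app inside this user's region."""
        instance = f"{app}#{len(session.app_instances) + 1}"
        session.app_instances.append(instance)
        return instance

    def route_touch(self, px: int, py: int):
        """Send a touch event to whichever user's region it landed in."""
        return next((s for s in self.sessions if s.region.contains(px, py)), None)

# Usage: two people share one table without getting in each other's way.
table = MultiTouchTable(3840, 2160)
alice = table.login("alice", {"open_apps"})
bob = table.login("bob", {"open_apps", "admin"})
table.launch(alice, "photos")
table.launch(bob, "photos")                    # a second, independent instance
print(table.route_touch(3000, 500).username)   # -> "bob"
```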

But still, how does one administer a graphical multi-touch environment?

GPPPU?

In relation to what I wrote yesterday, I would like to mention my idea of what Nvidia could be working on with their purchase of AGEIA in February.

I call it the GPPPU: General-purpose computing on physics processing units.

In other words, a physics processing unit that can also handle most of the applications that are usually dedicated to GPUs. It would be akin to GPGPU, which uses “a GPU, which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the CPU.”
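To see what that overlap already looks like from the software side, here is a small sketch of GPGPU doing “PPU-style” work: a toy physics integration step written as ordinary array math and executed on the graphics card. It assumes a CUDA-capable GPU and the CuPy library, and it has nothing to do with whatever Nvidia may actually be building.

```python
# A toy physics step (gravity + velocity integration) run on the GPU as
# general-purpose array math -- i.e. GPGPU doing "PPU-style" work.
# Assumes a CUDA-capable GPU with the CuPy library installed.
import cupy as cp

n = 1_000_000                                       # one million particles
dt = 1.0 / 60.0                                     # one 60 Hz frame
g = cp.asarray([0.0, -9.81, 0.0], dtype=cp.float32) # gravity, m/s^2

positions = cp.random.uniform(-100, 100, (n, 3)).astype(cp.float32)
velocities = cp.zeros((n, 3), dtype=cp.float32)

# One integration step, expressed as data-parallel array operations;
# the GPU processes all million particles at once.
velocities += g * dt
positions += velocities * dt

print(positions[:3])   # peek at the first few updated particles
```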

In the GPGPU arena, Nvidia is already competing with ATI (now owned by AMD); it is also in a war of words with Intel over the fate of the traditional CPU.

But if Nvidia is working on making a GPU that can do a CPU’s job, then how will they have the time and money to make a PPU that can do a GPU’s job?

On desktop GUIs for multiple monitors

OK, this one *will* be short.

I recall seeing a criticism on Digg of Mac OS X’s user interface when it comes to multiple displays. In fact, I think that neither OS X nor Windows was built for a multiple-display interface.

For one, both interfaces possess “bars” which span the length of the screen: OS X has the menu bar, while Windows has the Taskbar.

IMO, these bars, by drawing the mouse pointer toward a single screen, simply reinforce the single-screen metaphor while providing no functional space or capability on the other screens.

Another GUI element that discourages one from effectively using more than one display is the application window. Like the bars above, it spans at most the width of one screen and has to be dragged from one screen to another in a multi-display setup.

The fact that all of an application’s visible contents sit inside its window doesn’t help, either. Tabs within windows (especially in most modern web browsers), while letting one navigate more than one document without closing the window, also reinforce the window’s single-purpose look and feel.

The dock might be an answer – maybe a top and a bottom dock that don’t span the length of the screen and that perform specific functions for the UI – but it is also a very old metaphor, dating from the late 1980s.
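Purely as a thought experiment (hypothetical names, no real windowing API), here is roughly what I have in mind: instead of one bar pinned to a single screen, the shell would compute a short dock for each attached display, so every screen gets its own functional strip.

```python
from dataclasses import dataclass

@dataclass
class Display:
    """One attached monitor, in desktop coordinates."""
    name: str
    x: int
    y: int
    width: int
    height: int

@dataclass
class Dock:
    """A short strip that does NOT span the full screen width."""
    display: str
    x: int
    y: int
    width: int
    height: int

def docks_for(displays, fraction=0.5, height=48):
    """Give every display its own centered top and bottom dock,
    each only `fraction` of the screen wide."""
    docks = []
    for d in displays:
        w = int(d.width * fraction)
        x = d.x + (d.width - w) // 2
        docks.append(Dock(d.name, x, d.y, w, height))                      # top dock
        docks.append(Dock(d.name, x, d.y + d.height - height, w, height))  # bottom dock
    return docks

# Example: a laptop panel plus an external monitor to its right.
setup = [Display("internal", 0, 0, 1440, 900),
         Display("external", 1440, 0, 1920, 1080)]
for dock in docks_for(setup):
    print(dock)
```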

Hopefully, a new GUI element will come along in the future that redefines how we view our user interfaces as our displays expand.

Short: Dynamic tangible user interfaces

I think that next-generation dynamic tangible user interfaces (programmed computer expressions shown in a physical form that changes by the second) can make use of toys of yesteryear, such as the pin-art “needle bed” toy.

Imagine sending needle-imprint data from one side of the continent to a needle bed on the other side, which then interprets the data and reproduces the original imprint in its own pins.

That’s my idea of a tangible user interface, where physical impressions can be transmitted over a network to a separate tangible medium that duplicates them, and vice versa.
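As a back-of-the-envelope sketch of that data path: the hardware calls below (read_pin_heights, set_pin_heights) are entirely made up, and only the packing and the socket transfer are real Python. The idea is to sample the pin heights on one needle bed, ship them across the network, and push the same heights out on the bed at the far end.

```python
# Sketch: mirror the pin heights of one "needle bed" onto another over TCP.
# read_pin_heights() / set_pin_heights() stand in for hypothetical hardware
# drivers; only the packing and the socket transfer are real Python.
import socket
import struct

ROWS, COLS = 64, 96            # pin grid resolution (made up)
HEADER = struct.Struct("!HH")  # rows, cols as network-order uint16

def pack_frame(heights):
    """heights: ROWS x COLS grid of pin heights, 0-255 (0 = flush, 255 = fully out)."""
    body = bytes(h for row in heights for h in row)
    return HEADER.pack(ROWS, COLS) + body

def unpack_frame(data):
    rows, cols = HEADER.unpack_from(data)
    body = data[HEADER.size:]
    return [list(body[r * cols:(r + 1) * cols]) for r in range(rows)]

def send_impression(heights, host, port=9900):
    """Transmit one sampled impression to the remote needle bed."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(pack_frame(heights))

def receive_impression(port=9900):
    """Wait for one impression and return it as a ROWS x COLS grid."""
    expected = HEADER.size + ROWS * COLS
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn:
            data = b""
            while len(data) < expected:
                chunk = conn.recv(4096)
                if not chunk:
                    break
                data += chunk
    return unpack_frame(data)

# On the sending side:   send_impression(read_pin_heights(), "far-side-host")
# On the receiving side: set_pin_heights(receive_impression())
```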