Tag Archives: iphone

I’m late to this news but… (also, more ideas on an IDE)

Apple finally turned the iPhone (and iPod touch) into a Wi-Fi remote control for the Apple TV on the 10th.

Finally.

EDIT: I’m still trying to conceptualize an IDE for a mobile OS that can be used from within the mobile OS itself rather than from the desktop.

I mean, sure, extending a mobile OS to take on even more capabilities and to interact with or control other devices is a good thing, but should software development for mobile operating systems reside only in desktop IDEs?

But if it shouldn’t, then how does one turn mobile devices like the iPhone and iPod touch from perpetual receivers and consumers of software code into tools for producing software code?

Just like graphics and audio editing software, IDEs usually take up a number of windows on a desktop in order to keep the various options and functions perpetually within the user’s peripheral vision. Furthermore, like bare-bones text editors, they lean heavily on the keyboard for entering source code into the text area.

On mobile devices, however, there is very little screen real estate for windowed applications. Even on the current iPhone OS, which has no physical keyboard, the virtual keyboard that pops up every time one needs to type in a text field (say, for a Google or Yahoo search) tends to take up around half of the screen, from the bottom up.

So I look at the keyboard metaphor, and I wonder about how it could be best applied in the creation of content, or even software code.

What if the virtual keyboard in the iPhone OS could have dynamically added extra buttons for preset code templates?

It could work similarly to how wiki articles on Wikipedia or Wetpaint let one highlight a word or set of words and choose a way to format the highlighted section, such as Bold, Italics, Underline, Strikethrough, <ref></ref> or Soft Redirect. The graphical cues for creating page templates are even more apparent on Wetpaint and Pbwiki, which use a lot more Ajax to generate such cues.

Even in desktop IDEs, code templates are selected for any new project through graphical means, as shown in the most recent iteration of Dashcode.

So graphical code template buttons are one way to extend the virtual keyboard’s functionality into creative territory. Such buttons could, upon being tapped, insert an entire snippet of code into the text area above.
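Here’s a rough sketch, in the sort of JavaScript an iPhone webapp might use, of how such template buttons could drop snippets into the editing area; the names (templateButtons, insertTemplate, editor) are purely illustrative:

```javascript
// Hypothetical mapping from template-button labels to code snippets.
var templateButtons = {
  "for":  "for (var i = 0; i < length; i++) {\n    \n}",
  "func": "function name(args) {\n    \n}",
  "if":   "if (condition) {\n    \n} else {\n    \n}"
};

// Called when a template button on the virtual keyboard is tapped:
// the snippet lands in the text area at the current cursor position.
function insertTemplate(editor, buttonLabel) {
  var snippet = templateButtons[buttonLabel];
  var before  = editor.value.substring(0, editor.selectionStart);
  var after   = editor.value.substring(editor.selectionEnd);
  editor.value = before + snippet + after;
}
```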

Another approach to developing software on the iPhone OS from within the iPhone OS is to develop brand-new computer languages that would be entirely “graphical” or “non-textual”: in other words, while the language itself would be rendered by software written and compiled in textual form, tapping the virtual keyboard’s dynamically added symbol buttons would place graphical symbol code rather than text code.

It’s a lot like how one can talk to someone using symbol language, like “Eye Heart U” (with the respective symbols indicating the pronunciation) to say “I Love You”, or how keys on a game controller are marked with symbols that indicate the sequence that triggers your character’s next secret move.

The downside of creating an entire computer language out of graphical symbols is that one must be familiar with what each symbol indicates as an instruction in the greater source code. Does it stand for the equivalent of a single human-language word, or does it stand for an entire idea that might take a thousand human-language words to express, such as the code template button idea that I suggested earlier?
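To illustrate that distinction, here’s a hypothetical sketch of how such a “graphical” language might be stored: the source is just the list of tapped symbol codes, and each symbol’s entry says whether it stands for a single word or a whole template. Everything here (symbolTable, expandProgram, the glyphs) is invented:

```javascript
// Invented symbol table: each tappable glyph expands either to a single
// word of text code or to an entire snippet (a whole "idea").
var symbolTable = {
  LOOP:   { glyph: "\u27F3", expandsTo: "for (var i = 0; i < n; i++) {\n    \n}" }, // whole idea
  ASSIGN: { glyph: "\u2190", expandsTo: "=" }                                       // single word
};

// The "source file" is just the ordered list of symbols the user tapped.
var program = ["LOOP", "ASSIGN"];

// The renderer (itself written in textual code) can expand the symbol
// codes back into conventional text, or interpret them directly.
function expandProgram(program) {
  return program.map(function (code) {
    return symbolTable[code].expandsTo;
  }).join("\n");
}
```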

Finally, a third idea would be to “record” a workflow that could be installed on any iPhone OS device, or within any pertinent iteration of an iPhone OS application, as a “script” that reproduces the same or a similar action. This could be set up in a hypothetical “file type options” dialog box.

This would work by asking you exactly what you want to do with a file type, or with a file tagged with a keyword: do you want to save an archive of each file at this time every few hours or days, and do you then want to publish your file’s stats to a local or external database?
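As a sketch of what such a recorded workflow might look like once saved, assuming a made-up structure (recordedWorkflow, runWorkflow) rather than any existing format:

```javascript
// Invented structure for a workflow "recorded" through the dialog:
// which files it applies to, and the steps to replay.
var recordedWorkflow = {
  appliesTo: { fileType: "photo", tag: "vacation" },
  steps: [
    { action: "archive",      every: { hours: 6 } },
    { action: "publishStats", target: "local database" }
  ]
};

// Any device or app that understood this format could replay the steps
// without the user having to describe them again.
function runWorkflow(workflow, fileName) {
  workflow.steps.forEach(function (step) {
    console.log("Would " + step.action + " " + fileName, step);
  });
}
```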

However, this approach would demand that the application or OS provide as many properly laid-out options as one could possibly think of within an application. I’d favor the previous two ideas anyway.

Anyway, just a few ideas on the perfect touchscreen mobile IDE on the iPhone OS.

About the FSF’s stance on the iPhone and Embedded Mobile software

An FSF member has published his stance on the iPhone in the wake of the 3G release.

I’ll say that I agree with johns on his points concerning the iPhone blocking free software and free media; I can also see the problem with a phone that continues to provide feedback to proprietary mobile phone/navigation networks even when you turn it off.

However, the article then offers the FreeRunner device (which has the OpenMoko Linux distribution pre-installed) as an alternative to the iPhone.

Now, the OpenMoko platform has, for the last two years, been extolled on news websites as the quintessential free software smartphone OS, but I wonder about it and its proponents.

Primarily, I wonder why an “embedded” Linux distribution should be the poster child for the free software movement’s somewhat consistent, principle-based opposition to devices preinstalled with the iPhone OS. After all, most times we read about a GPL violation being taken to court by the Software Freedom Law Center, it usually concerns GPL’ed software being “embedded” into hardware without full compliance with the GPL’s letter. And when Richard Stallman talks about “Tivoization”, he is specifically talking about Embedded Linux being “too” locked down to comply fully with the GPL’s spirit.

Plus, when OpenMoko was being extolled as the geek’s ideal mobile OS on Digg, one of the primary reasons given was that it supposedly followed the “PC” model, where software and hardware modifications and extensions can be added.

So, if OpenMoko Linux is “more” extensible than the iPhone OS, does it remain an “Embedded Linux”, or does it become simply a “Mobile Linux” along the lines of Ubuntu MID?

And if the iPhone OS is “embedded” in how it supports SIM cards which are proprietary to the carrier (in this case, AT&T), then why should the FSF endorse an embedded Linux device that supports the same for a different carrier? Can you say “four more years”?

Instead, I wish that the FSF would endorse the development of PMP OSes that could compete with the iPhone OS via the iPod touch rather than via the iPhone.

Such a PMP OS, like the iPod touch’s installation of the iPhone OS, would be able to install free software, play/edit/distribute free media, and not give off a homing beacon proprietary to some carrier’s network.

Plus, it would (ideally) allow you to sync with any desktop client on any operating system of your choosing, not restrict you to syncing one library at a time, and even let you download files from the Internet from within the device’s browser.

Finally, the purpose of the speaker and receiver on the free software PMP OS would be to talk through open IM-based VOIP protocols, record conversations, and play sound out loud if the user chooses such an option.

It would essentially bypass the current focus of the majority of smartphone OSes on connecting with “data providers” and carriers, and give computing platforms to those who may not desire a laptop or anything bigger but aren’t as wild about getting cell phones (like myself).

At least, until cellular data plans are as cheap and as fast as a home Cable Internet plan (which won’t happen anytime soon).

A PC maker should buy a smartphone maker

Seriously, if Apple’s implementation of the smartphone can spark a flurry of similar implementations by traditional phone makers, then I think that one of the major PC makers (NO, not Dell or HP) should buy one of the smartphone makers.

Why?

Because a PC maker could envision a smartphone as more of a smaller-sized laptop, and could both design and build it, hardware-wise, to handle a lot of what is often left to laptops/notebooks to handle.

This would take the smartphone from being a simple “phone + email and text” to a simple “mobile desktop computer + phone”; clearly, the smartphone makers are focusing on the former, while Apple, a desktop maker, is focusing on the latter.

Idea: Icons as typed symbols for programming on multitouch handhelds

OK, as I’ve probably noted before, multitouch handhelds (or handhelds in general) aren’t built for serious text input. And I’ve written before that such limitations should not preclude the creation of programming environments from within the devices.

So since then, I’ve been thinking about what could be done for the creation of such environments which won’t require the serious text input that is common in older computing form factors (desktop, server, etc.). And I think I may have found another preliminary solution:

Replacement of text words with graphic symbols or icons.

This means that instead of typing out words letter by letter or key by key in order to construct a statement, one could simply create a string of graphic symbols which are interpreted by the handheld to mean a string of words and text symbols.

Preferably, the symbols would be rendered in SVG so that they could contain the data that would be used by the runtime (the web browser?) for interpretation, while providing customization abilities to those who may want to create similar yet distinctive symbols in order to symbolize a level of critical intensity (say, a yellow star as opposed to a red star).

Such usage of SVG (or, if you prefer, PNG) icons as a replacement for text could also be enhanced (and, hopefully, made more understandable) with animated icons and even sound embedded in the icons.
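Here’s a hedged sketch of what that might look like: an SVG symbol whose markup carries the data the runtime reads, with the data-* attributes and the “meaning” vocabulary invented for the example:

```javascript
// An SVG symbol that carries its own interpretation data; a yellow star
// could carry data-intensity="low" while keeping a similar shape.
var starSymbol =
  '<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32"' +
  ' data-meaning="warning" data-intensity="high">' +
  '<polygon fill="red" points="16,2 20,12 31,12 22,19 25,30 16,23 7,30 10,19 1,12 12,12"/>' +
  '</svg>';

// The runtime (here, just the browser) reads the embedded data rather
// than the drawing itself to decide what the symbol means.
function interpretSymbol(svgText) {
  var root = new DOMParser()
    .parseFromString(svgText, "image/svg+xml")
    .documentElement;
  return {
    meaning:   root.getAttribute("data-meaning"),
    intensity: root.getAttribute("data-intensity")
  };
}
```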

Now… what applications could be created with this that aren’t already creatable on a desktop computer?

I’m still thinking about that…

Visual programming on (and for) multitouch handhelds

I wonder if anyone who’s been following the iPhone, iPod touch, Neo1973 or any other multitouch handheld has considered the possibility of visual programming for these devices?

Since the devices, by nature, aren’t built to be the most adept typing machines (that goes for any mobile, since you can only use one finger at a time to type text on it), they are apparently not the best devices on which to create an application with a typed programming language (even the JavaScript that is used for any third-party webapps on the iPhone/iTouch). They are, however, *intensely* graphical, as the screens of these devices, which display the information received or generated by the OS, tend to take up the majority of the device’s front face.

So if the combination of a graphical interface with an all-fingers interaction method is our only way to make sufficient use of these devices, then what about the applications which could be installed on these devices (the jailbreaking of Apple’s devices is another story)? At the moment, most applications for mobile devices, multitouch or not, are created through the use of desktop computer keyboards, in a variety of programming or scripting languages (sometimes with interfaces that make use of markup, stylesheets and vector graphics) which have to be typed copiously and fluidly.

But what if one doesn’t have a desktop or laptop device available, but has a multitouch handheld to, well, handle? What if the user feels like creating an application that isn’t already available on the device, like a plugin for a built-in audio player interface?

I think that, in this case, an on-device visual programming environment, one that is made for multitouch interaction, may be the best solution for creating applications on the device.

Since visual programming is, from my supposition, less driven by the keyboard and more driven by the mouse on a desktop computer (drag-and-drop and all that), such a programming environment could probably be driven just as easily by fingers. This would make it much easier for the users of these devices to create installable programs and applications from within the devices rather than from without.
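For illustration, here’s a sketch of the kind of structure such a touch-driven visual programming environment might maintain behind the scenes, loosely in the spirit of a Quartz Composer-style patch graph; the node types and the moveNode helper are hypothetical:

```javascript
// Invented patch-graph structure: boxes placed by dragging, wired
// together by tapping an output and then an input.
var graph = {
  nodes: [
    { id: "audioIn",  type: "AudioInput",  x: 40,  y: 80 },
    { id: "gain",     type: "Gain",        x: 160, y: 80, params: { level: 0.5 } },
    { id: "audioOut", type: "AudioOutput", x: 280, y: 80 }
  ],
  connections: [
    { from: "audioIn", to: "gain" },
    { from: "gain",    to: "audioOut" }
  ]
};

// Dragging a node with a finger only updates its coordinates; no typing.
function moveNode(graph, nodeId, x, y) {
  graph.nodes.forEach(function (node) {
    if (node.id === nodeId) { node.x = x; node.y = y; }
  });
}
```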

Now if only Firefox were ported to the iPhone so we could try this theory out.

Demo of visual programming, using Quartz Composer:

Idea: wireless “dumb terminal” touchscreen display for smartphones

OK, just thought about this last night:

I notice that most smartphones, including the multitouch fullscreen ones like the iPhone and Neo1973, are built as all-in-ones, with the hard drive and processing unit placed directly behind the display.

Then I saw the video of the most recent Macworld keynote by Steve Jobs last night, in which he introduced the MacBook Air. I had already read on Digg about how this new notebook computer, in order to achieve one of the thinnest laptop form factors, sacrificed long-standing laptop components such as the optical disc (CD/DVD) drive, replacing it with an external optical disc drive that streams optical media data (even DVD-borne software titles such as Office ’08 for Mac or OS X 10.5) to the MacBook Air, where such data can then be copied or installed to the hard drive. This essentially rendered the MacBook Air, in Jobs’ words, a “wireless machine”; this may suit Apple’s modus operandi, which is directed more toward digitally transmitted data (iTunes) than toward data on physical discs like CDs, DVDs, or Blu-Ray/HD-DVD.

==============

Now, the following is becoming increasingly true:

The more mobile and free-wheeling the device, the smaller the form factor, the greater the network dependence, and the greater the need for interaction with the display.

The MacBook Air, the iPhone, and the iPod touch are all Apple’s mobile “wireless machines”. The software in these devices relies upon a wireless, streaming network in order for several applications to function correctly, and the interfaces involve a greater immersion of the finger for various tasks, whether it’s enlarging a photo or dialing a phone number.

However, in the mobile industry (including Apple’s mobile products), most of the components (CPU, hard drive, etc.) sit directly underneath the display (or, if it’s not multitouch, underneath both the screen and the buttons). The same goes for other multitouch computer devices, such as the Microsoft Surface, which is due for sale by at least 2009.

But what if the multitouch display could be used for the same communication purposes as your average smartphone without it being welded or tied to the computer that holds the hard drive and electronics?

==============

My idea is what I call “the commoditization of handhelds and other mobile computer devices”.

What this will allow is for someone to use one or more displays with a single, more stationary phone device, and to cheaply upgrade that stationary phone device with greater hardware enhancements than would be feasible if the phone device were tied directly to the display.

The phone device will “talk” wirelessly with the portable display, which has no hard drive and very few electronic components beyond those that allow the display itself to function at all. The display will recognize any touch motions, which are then sent back to the stationary smartphone device where the operating system is stored. The smartphone is also wirelessly connected to the Internet through a Wi-Fi router.
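As a very rough sketch of the two kinds of messages this implies (touch events going one way, rendered frames going the other), with the message shapes and field names entirely assumed:

```javascript
// What the display might send to the stationary phone: raw touch data.
var touchMessage = {
  type: "touch",
  gesture: "drag",
  points: [ { x: 120, y: 340 }, { x: 128, y: 331 } ]
};

// What the phone might stream back: a rendered frame for the display to
// paint, since the display keeps no OS, storage, or application logic.
var frameMessage = {
  type: "frame",
  width: 480,
  height: 320,
  pixels: "<compressed image data>"
};
```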

This will be a more cost-effective approach to the creation of smartphones, as it will let the computer and the portable multitouch display each stay true to the concerns of its own domain.

This also allows for greater innovations within the display field that will eventually make it to the mobile market. This is one example: