20101216

The Future of Android and the Chrome OS

Apparently I'm not the only one to have some innovative thoughts about Android and the Chrome OS. Slashdot has a story that mentions how the two should be merged in the near future. Furthermore, Paul Buchheit, the creator of Gmail, also seems to agree. I'm still a little bit hesitant to disclose all of the technical details publicly, but it would seem that Google is starting to take some steps in the right direction. My plan essentially boils down to taking the Android userspace back to the desktop and incorporating an Android App Screen, much like what Apple showcased at their Back to the Mac event, but also modifying apps so that they are literally write-once, run-anywhere - on any architecture. Incidentally, it would also enable users to run any of the tens of thousands of amazing open-source applications already available, and potentially tie in a revenue system for open-source projects via the Android app store.

Perhaps it was slightly selfish to come up with these ideas - I really just wanted to beautify Linux in both software and hardware, and make the user experience just as enjoyable as it is on a Mac (minus the annoying hard-coded settings). Could 2011 finally be the Year of the Linux Desktop?

20101209

Oracle vs. Google, Apache Resigns from JCP

Now that the Apache Software Foundation has officially resigned from the Java Community Process, I think it's fair to say that Oracle has passively waged a war on any entity pairing the terms "open" and "java".

Most people are unaware of just how large and far-reaching an impact this has. What it boils down to is this:

  • Oracle is unwilling to participate responsibly in the community revision and release process that they wholeheartedly bought into with their acquisition of Sun Microsystems
  • thus, anyone wanting to use a certified Java environment to run their code must obtain that Java environment from Oracle
  • therefore, Java, as a specification, will be 100% controlled, owned, and steered by Oracle
This affects everything from the software you can run on your mobile phone to the software that you can run on your corporate servers - so yeah, it's pretty far-reaching. Furthermore, there are probably millions of companies that rely on (open source!) Java technology to do their day-to-day business, whose livelihoods are now threatened.

Oracle's only "open source" offering of Java is OpenJDK, which is completely inadequate in MOST of the cases where I would ever use Java. Let me elaborate.

I mainly use Java to program for two platforms. The first is a barebones Linux system running on an ARMv4T processor with 32MB of storage. I actually tried to run OpenJDK on it over an NFS root - let me tell you how much of a freaking joke that was. OpenJDK used over 110MB of storage space. My preferred JVM, on the other hand, is JamVM paired with GNU Classpath, which can be tuned to use as little as 7MB of storage. OpenJDK is bloated, and I'm putting that as euphemistically as I can. OpenJDK takes a century to do anything on this platform; JamVM, on the other hand, is practically like lightning. In short, OpenJDK is only for desktops and servers. I've never even attempted to use J2ME due to its long list of shortcomings (just google "j2me shortcomings", e.g. this).

My second platform for programming in Java is Android, which (openly and willingly) borrows its Java base from the Apache Harmony Project. I think Harmony has some serious potential on ARM, considering what I've seen so far with Android. Anyone who knows me knows that I like Android and I think it's a great piece of work. I have personally hacked Android onto several devices, and continually find that it's doing great things with both the native code and Dalvik / Java. One could arguably say that Android was the best thing to happen to Java in the last decade. Oracle is suing Google, claiming that Google somehow copied code directly from them (or Sun) and put it right into the Apache Harmony Project. This is completely ridiculous because Harmony was, in fact, created before Sun made any of its source open.

Oracle also owns a few patents covering various 'inventions' in the form of software (originally filed by Sun), and they're incorporating claims of patent violation into the Android / Google lawsuit. Let me be (hopefully not the first) to point something out here. Sun's original strategy, in filing these patents, was to prevent any kind of patent-related lawsuit that might be inflicted upon Java or any of its users. Indeed, the Sun patents were originally intended to be purely defensive in nature. Oracle has turned that around 180 degrees and started using those patents offensively, to sue companies that actually do innovative things with Java.

The state of Oracle vs. Google puts Google in a really tight position. I agree (and so do several now-resigned members of the JCP EC) that Oracle has done the worst possible thing that it could with Java as a platform and specification. Oracle's position prevents people from modifying and redistributing Java for whatever purpose they want (with source provided), which is the fundamental attribute of any piece of open source software according to Bruce Perens' definition. This means that Oracle's "open source" offering of OpenJDK might as well be binary-only for any meaningful purpose.

Now, the Android community, which has had enough vision to do something genuinely new and useful with Java, is relying on a possible court ruling that Oracle's software patents are invalid. In my opinion, they are, but that's only because I think all software patents are invalid. The USPTO, on the other hand, has traditionally given out software patents like business cards. The USPTO is getting more reasonable, and is even revoking software patents in some cases, but I feel that it's a dangerous position at large for Google, the Open Handset Alliance, and the Android developer community.

I recently submitted several ideas to Google that should appease both Oracle (from a legal standpoint) and Google in this whole fiasco, while simultaneously injecting Android with enough adrenaline, marketability, and sex-appeal to push it even past OS X, the iPhone, and the iPad. I've put in a fair amount of technical legwork in my free time to see just how realizable this is - and it is very realizable. I've communicated some of these ideas to various developers of the original software components and have had very positive feedback. It is doable. Google even responded to me, but it's been a couple of weeks now and I haven't heard anything since. These suggestions could literally be the best thing to happen to Linux and Android in a long time. I realize that Google is busy (trust me, I understand what it's like), but I do hope they reply. I freely offered these suggestions to Google, purely for the sake of securing Android w.r.t. Java. Whether they wanted to give me a job for implementing all of it was irrelevant - I keep myself busy doing the things that I love regardless of who I'm working for.

The solution that I presented to Google was only good for temporary purposes - it still doesn't address the issue of Java as an open specification or Java as an open piece of software. For those reasons, I was saddened today by the announcement of the ASF because it only confirmed Oracle's passive-aggressive position... and potentially the end of Java as a good choice for a programming language.

Edit: I just thought I would point out Oracle's self-contradiction as well: if you make, e.g., OpenJDK available to the public under the GPL, then how can you claim to withhold people's right to modify it and redistribute it, provided they supply the source? In this sense, there could be absolutely no wrong done by Google, considering they didn't even base their code on OpenJDK but rather on Harmony.

20101106

MultiCore Threading

This post is partially in response to a question somebody asked me recently about threading on ARM Cortex-A9 systems. I was asked whether, just by creating several new "threads", the threads will "automatically" run at the same time on separate cores without any operating system or system library interaction. The short answer is no.

The long answer begins with a 1-minute history of computer architecture. A processor generally has something called an instruction pipeline. For scalar architectures, this meant that only one instruction (read: hardware function) could ever be executed at any given time. Some clever hardware engineers determined that this was not utilizing the hardware as effectively as possible, so they came up with the idea of pipelining and, later, the superscalar architecture, which allow more than one hardware function to be in flight at a time. Generally speaking, this meant that if the 'add' function was being used at one point in time, the 'memory fetch' function could also be used at the same point in time. This introduced something the industry termed a 'data hazard'. For example, if a certain add operation depended on the result of a memory fetch operation, then the add function would produce unanticipated results if the memory fetch operation had not completed in time. The first solution to this problem was to introduce stalls in the pipeline, which were (and still are) very bad. The second solution (really an improvement on the first) was to add another hardware unit to the chip that would re-order the instructions before sending them down the pipeline, in order to minimize pipeline stalls due to data hazards. That hardware unit was called an out-of-order execution unit. Instruction scheduling can actually be done in software as well, by the compiler and linker, but since this only allows off-line instruction scheduling, it cannot account for asynchronous events that are only stochastically predictable. This is where the branch prediction unit comes into play, but I'll omit that for brevity. So far, only instruction-level parallelism has been covered.
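To make the data-hazard idea concrete, here is a tiny, purely illustrative C fragment (not from the original question): the add depends on the preceding load, so an in-order pipeline would have to stall on it, while an out-of-order unit (or the compiler's instruction scheduler) can execute the independent work in the meantime.

```c
/* Illustrative only: a load-use dependency vs. independent work. */
int sum_with_offset(const int *a, int n, int offset)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        int loaded = a[i];   /* memory fetch                               */
        sum += loaded;       /* depends on the fetch: would stall in-order */
        offset += 3;         /* independent: can be scheduled earlier      */
    }
    return sum + offset;
}
```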

[Figure: The Cortex-A9 Pipeline]
[Figure: The Cortex-A9 MPCore]
Now, most manufacturers realized that it would be best to let uniprocessor code execute on multicore systems, so that programmers and compiler designers wouldn't have nervous breakdowns trying to optimize their code for the googols of system permutations that would be in existence (all of them would be vector processors). Thanks to all manufacturers for that one. ARM is no different, for the Cortex-A9 family of processors still belongs to the ARMv7-A instruction set architecture. However, the general decision to run uniprocessor code on multicore systems necessitated software entities to manage and, really, schedule when and where that code would be executed.

Getting back to the original question, it's important to consider what a 'thread' actually is. A thread is a pure software abstraction for a logical sequence of events. Threads are often associated with a priority, a state (e.g. ready, waiting, zombie), and an instruction pointer. As for threading abstractions (e.g. POSIX threads), they must a) introduce data protection primitives, as well as mechanisms to b) wait until data is not in use and c) signal when data is no longer in use. Usually, the operating system deals with scheduling which threads are running at any given time, although it isn't that hard to do this without an operating system. The fundamental method of synchronizing threads is via shared sections of memory and atomic processor instructions. The thread scheduler uses timer-generated hardware interrupts to periodically evaluate the state of all threads, and then schedules code (i.e. determines the next branch target) for 1 to N cores. In the case of a uniprocessor system, this means that the scheduler itself is being swapped in and out after a certain number of time slices, where each time slice is occupied by a thread based on priority, state, etc. The number of cores available at any given time is also controllable with software, since cores can be dynamically powered off to save energy. This is something that the thread scheduler must take into account.
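As a minimal sketch of points a) through c) above, here is what they look like with POSIX threads on Linux (the names shared_value, have_value, and worker are just illustrative): a mutex protects the shared data, the main thread waits on a condition variable until the data is ready, and the worker signals when it is done.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int shared_value = 0;
static int have_value   = 0;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);       /* a) protect the shared data   */
    shared_value = 42;
    have_value = 1;
    pthread_cond_signal(&ready);     /* c) signal that data is ready */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);

    pthread_mutex_lock(&lock);
    while (!have_value)              /* b) wait until the data is ready */
        pthread_cond_wait(&ready, &lock);
    printf("got %d\n", shared_value);
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}
```

Compile with gcc -pthread; note that whether the two threads ever actually run on separate cores at the same moment is entirely up to the scheduler.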

As for initialization of each core: typically a single core is activated at power-on, then, as the operating system (or main binary) launches, a threading manager is also launched. The threading manager initializes and creates descriptive data structures for the remaining cores on the system, and so on. As each core runs, it literally operates in a loop: 'jump' to an instruction and start executing, or go to sleep if not needed; then do the same thing again. The details of power-up, particularly in the case of the ARM architecture, are very manufacturer dependent, since e.g. an OMAP MPCore implementation can have several physical differences and different register locations from, e.g., an MSM MPCore implementation.
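A very rough, hypothetical sketch of that loop, as a secondary core might run it in a generic mailbox / spin-table bring-up scheme (the mailbox variable and its location are invented purely for illustration; real bring-up is SoC-specific, as noted above):

```c
typedef void (*entry_fn)(void);

/* In a real system the boot core writes the entry point here; where this
 * mailbox actually lives is SoC-specific. */
static volatile entry_fn secondary_entry_mailbox = 0;

void secondary_core_park(void)
{
    for (;;) {
        entry_fn entry = secondary_entry_mailbox;
        if (entry)
            entry();                    /* jump to the code scheduled for us   */
        else
            __asm__ volatile ("wfi");   /* ARM: sleep until an interrupt/event */
    }
}
```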

In summary, sure, it's easy to have several cores running at the same time, but getting them to coordinate shared data properly (i.e. run threads with shared data sections) requires that the concept of parallel execution be built into an application or library (which is not always easy). For a simple example, assume a library allocates 512 MB (i.e. 2^29 bytes) of memory, sets it all to zero, and then deallocates the memory. Would it run any faster on a multi-core system than it would on a single-core system? Absolutely not, because the processor cores do not follow the programming methodology of DWIMNWIS ('do what I mean, not what I say') - unless the system has a pretty advanced hardware rescheduler.

If I modify the library to first query a threading library for the number of logical cores, partition my buffer into N sections, and then create several threads that are each aware of their own partition boundaries, then I can expect my library to perform faster by a factor according to Amdahl's law: S = 1 / (F + (1-F)/N). In this case, since the fraction of the problem that is not parallelizable is 0%, F = 0, and the ideal speedup is S = N (in practice, memory bandwidth will cap the gain for a job this trivial). However, even when a thread scheduler is present, there is still no guarantee about where the code will actually run - for example, all threads could end up running on a single core rather than being distributed among them all.
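Here is a sketch of that partitioning scheme, again with POSIX threads (buffer size and names are illustrative, and I'm using sysconf() rather than a threading library to query the core count): split the buffer into one slice per online core and let each thread zero only its own slice.

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE ((size_t)1 << 29)          /* 512 MB */

struct slice { char *base; size_t len; };

static void *zero_slice(void *arg)
{
    struct slice *s = arg;
    memset(s->base, 0, s->len);             /* each thread touches only its slice */
    return NULL;
}

int main(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN); /* number of logical cores online */
    if (n < 1) n = 1;

    char *buf = malloc(BUF_SIZE);
    pthread_t *tid = malloc(n * sizeof *tid);
    struct slice *sl = malloc(n * sizeof *sl);
    if (!buf || !tid || !sl) return 1;

    size_t chunk = BUF_SIZE / n;
    for (long i = 0; i < n; i++) {
        sl[i].base = buf + i * chunk;
        sl[i].len  = (i == n - 1) ? BUF_SIZE - i * chunk : chunk;
        pthread_create(&tid[i], NULL, zero_slice, &sl[i]);
    }
    for (long i = 0; i < n; i++)
        pthread_join(tid[i], NULL);

    free(sl); free(tid); free(buf);
    return 0;
}
```

Even here, the scheduler is free to run all N threads on one core, which is exactly the caveat above.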

[Figure: Amdahl's Law, S = 1 / (F + (1-F)/N)]
Oddly enough, even though I have been writing threaded code for about a decade, I still have a pretty antiquated workstation on my desk (by today's standards). Indeed, my workstation is a single-core Pentium M laptop. It is a surprising hunk of garbage that never decides to finally die, although it's been close on several occasions. In any case, I hope to have that upgraded soon to a quad-core Intel i7 machine, so that I can have 8 logical threads to speed up my ultimate goal of world domination.


Also, I'm looking forward to receiving a PandaBoard shortly, with an OMAP4430 dual-core Cortex-A9 chip. This will give me an incentive to do some SMP performance tweaks on some of the NEON-enabled software I've written lately (e.g. FFTW).

20101001

Smoking Laws in Ontario, Quebec

On a different note, I thought that I would talk about an idea I had for an amendment to the existing Ontario bylaws that prohibit smoking in public buildings, public places, or private automobiles with a child present. Smoking in those situations is already illegal in most municipalities in Ontario. It's getting better in Quebec, but Ontario is a bit further ahead. The biggest problem, in my opinion, is smoking in the home around children. Certainly, there have already been numerous studies that indicate without a doubt that smoking in the home around children is harmful, but why isn't it illegal? I suppose that one could argue that it interferes with a person's rights... to smoke... and ... err... slowly poison their children (!?). Ridiculous, right? In some cases such as mine, it isn't even a voluntary decision, and I would really prefer not to have any member of my family, but especially my son, slowly poisoned.

In Montreal, it's common to rent and live in small buildings where each floor is a residence in itself. In some older buildings, smoke actually travels very easily through areas of flooring (or ceiling), through ventilation, or even through plumbing! I live in such a dwelling, with my 2½-year-old son and his lovely mother. The tenant who rents the residence below us refuses to smoke outside, even when the weather is clearly warm enough to do so comfortably. The second- and third-hand smoke comes up into our apartment through various sections of our fairly old 3-story building - we live on the top floor - and at times it is so dense that we can actually see it in the air. Clearly in those circumstances, we open the front and back doors to our balconies and use a fan to properly ventilate the apartment. Actually, we need to use 3 fans, since the apartment is rather long and old-fashioned, and there are no other sources of ventilation aside from the front and back balcony doors. However, we cannot leave our doors open all day and all night, since it would basically be equivalent to inviting people to steal from us, and we would freeze to death in the winter. Not to mention that it probably has a measurable effect on our heating & electricity bill. So when we come home, we often find that the apartment smells awful. To be specific, it smells like a mixture of cigarette smoke and Febreze, which our neighbor downstairs thoughtfully uses to mask the odor. Thanks for the thought, but the Febreze really doesn't make it smell any better, and it certainly isn't reducing the health risks to my son.

I find it particularly bothersome when I hear my son cough in the middle of the night and I go into the kitchen to get him a glass of water, only to be greeted by a cloud of carcinogens. It's really no wonder why he's coughing. 

It's most certainly not healthy for our son, or us, at any time of the day or night. I've asked our neighbor politely to only smoke on the balcony, making specific mention about my concern for my son, and I've even asked the landlord to speak with our neighbor, but this hasn't made a difference and has only made our neighborly relationship less pleasant. Actually, our landlord lives in the building too, and he shares the exact same cloud of carcinogens coming up through the flooring. He also said it's intolerable at times, but unless there's a law about it, there's really not much more he can do. Moving is not an option. We really like our place, and from the landlord's perspective, he's stuck - this building is his property, business, and home. We live near a great park and we get a great sunset on our balcony. The kids in the neighborhood all seem happy, it's minutes from downtown and just far enough to be not down-towny, and ... well, there's a really great park across the street!

When will smoking laws catch up with common sense!?

I think this is a reasonable suggestion to the committee responsible for making smoking laws. Although the residences in question are not public buildings, there are direct effects on the health and safety of members of the public. It's basically the same logic that requires drivers to drive slowly in an area where a deaf child lives, even if it's not their child. Logical, right? Complaints would probably be followed by a building inspection and a mandatory no-smoking sign. I'm just hoping that maybe someone in the Ontario & Quebec governments will read this post and decide to finally take some action so that smoking laws catch up with common sense.

Will NVidia Follow Suit of AMD's Doc Disclosure?

Recently, AMD committed to releasing technical documents for their GPUs in order to help open-source software developers write better 2D and 3D graphics drivers. AMD actually followed through with that commitment as well, and you can find the technical documentation here, if you're interested. Thanks AMD!

Although AMD will continue to release binary x86* Linux drivers, the release of their chipset documentation (actually for the R300, R500, and R600 series) is intended to improve the 'out-of-the-box' experience for PC users.

AMD's chips are entirely x86, from what I can tell, although I think I remember a rumor that they licensed some of their graphics technology to Apple for the chips that went into the iPhone, iPad, and iPod Touch. Aside from that, AMD has no (publicly visible) vested interest in having graphics drivers that are architecture independent.

On the other hand, NVidia actually purchased an ARM license and produces its own Cortex-A8 and Cortex-A9 silicon with integrated NVidia graphics (Tegra, Tegra2), so it has both an x86 and an ARM presence now. Not only that, but NVidia continues to be the sole surviving standalone GPU company, since AMD bought out ATI.

However, NVidia seems to be encountering production delays trying to get (Linux-based?) Tegra2 products to market. I can only assume that they aren't having silicon issues[1], so it really must be an issue of getting software to drive the hardware well. They have opened up their Tegra2 site to Linux developers, offering a development board, source code, and binaries. However, I'm really left wondering whether they could also benefit from disclosing some documentation of their graphics cores, and perhaps the Tegra2 TRM, so that the next generation of NVidia-powered mobile devices would also provide an excellent 'out-of-the-box' 2D and 3D user experience.

Will NVidia follow suit with NDA-free documentation disclosure? Let's hope so... it would definitely be enough to convince me to buy a Tegra2-based device.

[1] as in: whoops! this graphics subsystem only processes data at 1/2 the necessary rate! .... ahem... maybe you know who I'm talking about

20100613

Linux on the Nokia N8?

Incidentally, if there is anyone interested in hacking Linux (read: Android, Angstrom, Gentoo, etc.) onto the Nokia N8, please leave a comment below. Honestly, this is probably the best hardware I've yet seen (apart from lacking noise-cancellation) for a potential hacker-friendly device. I am assuming for now that Nokia used an OMAP3 in this device, which is probably the best SoC (in terms of Linux-hacker-friendliness) to date with freely available documentation.

Pre-Departure Updates

The last few weeks have been insanely busy for me.

First, I sold all of my furniture and said goodbye to my former apartment in Kiel, since I'll be leaving for Montreal on Tuesday (yaaay!!). Since then, I've been couch-surfing at a friend's apartment down the street. Moving out was a huge undertaking, and I'm quite relieved that it's over. There's something extremely liberating, in a sort of Zen-Buddhist way, about living out of a backpack.

Also, I've been putting in crazy amounts of overtime on my thesis project, which is coming along spectacularly. I've been meaning to write a blog post about it without giving away too many things prematurely (call me superstitious, but I feel it could jinx me in the end). All I can really say at the moment is that it's really pushing the physical limits, and that the antenna actually depends on materials being in the near-field. I will allow myself to expand on this point alone for clarification.

For most people with any background in physics or engineering, it's common knowledge that EM wave propagation slows down in matter. In free space the propagation velocity equals the speed of light, but in any material with a relative permittivity greater than 1 the propagation velocity decreases. However, since the measure of time remains constant, the frequency remains constant. Subsequently, in order to maintain equality, the wavelength (L) shrinks according to L = v / f, where v is the reduced propagation velocity in the material - roughly c / sqrt(er) for a non-magnetic material. A really fantastic consequence of this (antennas not-in-free-space) is that an antenna tuned to a specific frequency and surrounded by a given material is often a significant fraction smaller than the equivalent antenna in free space. The resonance remains the same regardless of the angle of incidence (although directional gain is clearly affected). Half-space (or really multi-space) simulations of my antenna design have (thus far) allowed me to reduce the antenna size by a factor greater than 2! Without this near-field effect, it would literally be impossible to create an antenna that resonated in my required frequency range (the lower frequency bound, being inversely proportional to the antenna dimensions, is the limiting factor).

This last week, I've been working on accurate 3D modelling of a planar antenna projected onto the surface of a half-ellipsoid (in order to approximate the inner curvature of a prosthetic eye), which will be the final addition to my simulation. I will then need to do some fine-tuning of the antenna dimensions (this will likely be some sort of constrained numerical optimization, perhaps MMSE), and finally I'll be able to build a physical prototype. It's safe to say, though, that it has been far from an easy task. Limitations of our FDTD software and API did pose a major hurdle at one point. I've been doing a lot of the 3D modelling lately in Matlab (with its severely limited Delaunay triangulation capabilities), but I will eventually (or rather, in the next week or two) have to write a Delaunay triangulation module in pure Python to interface with the FDTD API. I'm not a huge fan of Python, but I do what I must. In short, I really think that this antenna will be the first of its kind. I can't imagine that anyone has ever created such a specific design, just as the Eyeborg project is equally the first of its kind. The remaining work will be a bit of an exercise in reverse engineering, since I have received absolutely zero assistance (so far) from WUSB transceiver chip vendors. I'll also need to improve the state of the Linux WUSB stack. Hopefully when chip vendors see a demonstrated prototype they'll be more inclined to cooperate with us on the Eyeborg design.
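For a rough feel of the wavelength-shrinking effect (the frequency and permittivity below are purely illustrative placeholders, not the actual design values), here is a tiny C snippet evaluating L = v / f in free space versus inside a dielectric:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double c   = 299792458.0;   /* free-space speed of light, m/s */
    const double f   = 4.0e9;         /* example frequency: 4 GHz       */
    const double eps = 9.0;           /* example relative permittivity  */

    double lambda_free  = c / f;                /* wavelength in vacuum      */
    double lambda_mater = c / (f * sqrt(eps));  /* wavelength in the material */

    printf("free space: %.1f mm, in material: %.1f mm (factor %.1f)\n",
           lambda_free * 1e3, lambda_mater * 1e3, sqrt(eps));
    return 0;
}
```

Compile with -lm; the shrink factor is just sqrt(er), which is where the better-than-2x size reduction comes from.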

My GSOC project was on a bit of a hold this week, since it was my last week in Germany and I needed to focus on thesis work before my flight on the 15th. However, tonight I should be able to accomplish the tasks that I set for myself last Monday. Keep an eye on my GSOC blog tomorrow for my weekly report.

Lastly, I leave you with a token of motivational music that should indicate my overly-caffeinated state of late. Major thanks go to the countries of Ethiopia (for producing such great coffee) and Austria (for inventing Red Bull).