DyGraphs Pie Chart Plotter

DyGraphs is a decent JavaScript library for plotting time series.

I chose this library a long time ago, mainly due to its small footprint: 123,530 bytes for dygraph.2.0.0.min.js.

One of the things it allows you to do now is to add a different plotter algorithm to plot data. One such example, found on the demo page, is a BarChart plotter. If you look at the code it is a fairly small addition.

One of the possible plots missing though is a PieChart. It happened that I needed a PieChart for my project and I did not want to switch to e.g. ChartJS ( release 2.5.0 ), so I wrote my own little PieChart function for DyGraph.

      function pieChartPlotter ( e ) {
        var ctx  = e.drawingContext;
        var self = e.dygraph.user_attrs_.myCtx;
        var itm, nme, data = self._pieData;
        if ( ! data )  {
          var t, i, y, all, total = 0;
          data = {};
          all = e.allSeriesPoints; // array of arrays, one per series
          for ( t = 0; t < all.length; t++ )  {
            y    = 0;
            itm  = all[t];
            nme  = itm[0].name;
            for ( i = 0; i < itm.length; i++ )
              y += itm[i].yval;
            total += y;
            data[nme] = { color : null, y : y };
          }
          data.total    = total;
          self._pieData = data;
        }
        if ( data[e.setName] )
          data[e.setName].color = e.color;
        var delta   = ctx.canvas.width > ctx.canvas.height ? ctx.canvas.height : ctx.canvas.width;
        var center  = parseInt ( delta / 2, 10 );
        var lastend = 0;
        ctx.clearRect ( 0, 0, ctx.canvas.width, ctx.canvas.height );
        for ( var name in data )  {
          if ( name === "total" ) // skip the grand-total entry, it is not a series
            continue;
          itm = data[name];
          if ( self._highlighted === name )
            ctx.fillStyle = "#FF8844";
          else
            ctx.fillStyle = itm.color === null ? "#888888" : itm.color;
          ctx.beginPath ( );
          ctx.moveTo ( ctx.canvas.width / 2, ctx.canvas.height / 2 );
          ctx.arc ( ctx.canvas.width / 2, ctx.canvas.height / 2, center / 2, lastend, lastend + ( Math.PI * 2 * ( itm.y / data.total ) ), false );
          ctx.lineTo ( ctx.canvas.width / 2, ctx.canvas.height / 2 );
          ctx.fill ( );
          lastend += Math.PI * 2 * ( itm.y / data.total );
        }
      }

The one thing you will see in this code is that I calculate the required PieChart data once and then check for its existence each time I enter this function. This is required because DyGraph currently does not call the plotter function in a context but rather in the global browser context ( i.e. the this object is the browser Window ).

So instead I ‘added’ ( read: hacked ) myCtx into the dygraph plotter options to gain access to my local JavaScript object where I buffer the _pieData.
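To make that wiring concrete, here is a minimal sketch of how the plotter and the buffering object travel through the Dygraph options together. The labels, series names, and element id are made up for illustration; pieChartPlotter is the function listed above.

```javascript
// Sketch of the plotter wiring described above. The data, labels, and
// element id are illustrative; pieChartPlotter is the function from this post.
function pieChartPlotter ( e ) { /* ... as listed above ... */ }

var self = { _pieData : null, _highlighted : null }; // buffers the pie data

var opts = {
  labels  : [ "x", "apples", "oranges" ],
  plotter : pieChartPlotter, // called by Dygraph without a useful `this`
  myCtx   : self             // the 'hack': reachable via e.dygraph.user_attrs_.myCtx
};

// new Dygraph ( document.getElementById ( "graphdiv" ), data, opts );
```

Because the options object is handed back to the plotter through e.dygraph.user_attrs_, anything you attach to it, like myCtx here, survives between plotter invocations.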

While this may not be the nicest pie chart around, it is a small, basic function which can be expanded on fairly easily.
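The heart of the plotter is the slice geometry: every series' share of the grand total maps to an arc of 2 * PI * ( y / total ) radians, accumulated in lastend. Stripped of all the canvas calls, the idea reduces to this little sketch ( series names and values are made up ):

```javascript
// Compute start/end angles (in radians) for each pie slice, the same way
// the plotter above advances `lastend` by 2*PI * (y / total) per series.
function sliceAngles ( totals ) {
  var name, total = 0;
  for ( name in totals ) total += totals[name];
  var angles = {}, lastend = 0;
  for ( name in totals ) {
    var span = Math.PI * 2 * ( totals[name] / total );
    angles[name] = { start : lastend, end : lastend + span };
    lastend += span;
  }
  return angles;
}

var a = sliceAngles ( { apples : 3, oranges : 1 } );
// apples spans three quarters of the circle, oranges the remaining quarter
```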

Posted in Uncategorized | 2 Comments

Amazing Tech, Stupid Copyright Application

About 8 months ago I created a video with my son, walking through a shopping center somewhere in Virginia. I had just received my 220 degree fisheye camera, and played with it by recording our walk along the aisles.

This was uploaded before I had 360Tube to properly convert the video into an equirectangular video. So when I looked at my video collection I found a copyright notice on this video, which surprised me.

She Says - Howie Day
Sound recording
0:35 - 2:28

So the amazing thing is how accurately Google's technology can pick up the background sound from our stroll through the mall, despite our voices in the foreground, some additional speaker announcements, and the overall poor quality of the pieces of music which I can barely hear in the background.

A marvel of technology.

Now think about this for a second. If we can auto-detect music within background noise through this amazing technology, then why don't we have flying cars by now ?

Also think about the ridiculousness of copyright enforcement in this case.

I am all for protecting one's IP through copyrights ( not patents ). However, do you think that someone would actually watch this video just to enjoy the song playing in the background while we walk through a mall ?

I don’t think so. As a matter of fact, I believe the music industry should pay me product placement fees, and we could use the same technology to get them to pony up.

Posted in Uncategorized | Leave a comment

Android getContentResolver or the developer-crossword-puzzle

So I have a question for whoever thought that the Android getContentResolver API is a good idea and easily usable by developers.

What the heck were you thinking dude ?

I have spent two full days trying to figure this one out, and it still escapes me how I can share files between applications.

I have clicked through and read all of the first two pages of my Google searches in various forms, to no avail. And thus I conclude that this thing, which should be as simple as

  String mimeType = MimeTypes.getMimeType ( file.getName ( ) );
  Uri uri = FileProvider.shareFile ( file );
  intent.setDataAndType ( uri, mimeType );
  startActivity ( intent );

and on the receiving side

  String path = context.getContentResolver ( ).getFilePath ( uri );

is mired in a lot of confusing obstructions and hard-to-understand reasoning. Take, for example, the code you will find everywhere online:

 cursor = context.getContentResolver().query(uri, projection, selection, selectionArgs, null);

Then you find references to queries, tables, records, rows, and columns. That may be great for sharing a DB, however it does not solve file sharing. Yet that is the code you have to fight with.

Of course, cursor is mostly null, and in the cases where it is not, the above query will simply throw an exception.

Now let's assume you get through all of this and have a valid cursor object. Even then, this piece of code fails you:

  if ( cursor.moveToFirst ( ) )
    path = cursor.getString ( 0 );

So what does this all mean ? For me it resulted in two very frustrating days of lost productivity and head-scratching as to why such a simple task is so unnecessarily complex.
Google: There is no excuse for this complexity.

Here is the final code to retrieve the file path ( this lives inside a helper method which returns the path as a String ):

  String path = null;
  if ( "content".equalsIgnoreCase ( uri.getScheme ( ) ) ) {
    Cursor cursor = null;
    try {
      String[] proj = { MediaStore.Images.Media.DATA };
      cursor = context.getContentResolver ( ).query ( uri, proj, null, null, null );
      if ( cursor != null ) {
        try {
          int column_index = cursor.getColumnIndexOrThrow ( MediaStore.Images.Media.DATA );
          if ( cursor.moveToFirst ( ) )
            path = cursor.getString ( column_index );
        }
        catch ( IllegalArgumentException e )  {
          e.printStackTrace ( );
        }
      }
    }
    catch ( Exception e )  {
      e.printStackTrace ( );
    }
    finally {
      if ( cursor != null )
        cursor.close ( );
    }
    String str = path;
    if ( str != null && str.indexOf ( "/file/" ) == 0 )
      str = str.replace ( "/file", "" );
    return str;
  }

Posted in Uncategorized | Leave a comment

EleCam 360 vs PanoView 360

I thought I'd share my experience using both cameras on our family trip to Florida.


There is definitely a huge advantage to using the EleCam 360 because of its two lenses and the coverage of the full 360 degrees of view.


The PanoView sports only one lens, which covers a 220 degree field of view. However, it delivers a higher resolution than the EleCam's sensor: 2448×2448 for video and 4096×4096 for still images.


The EleCam has a combined resolution of 1920×960 for video and 3008×1504 for still images.

You have to keep in mind though that the resolution is that of a rectangle with the round fisheye image inside it, and that this dimension is the actual radius of the image for the PanoView and two times the radius for the EleCam. The final resolution you will see on YouTube depends on your rendering settings.
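One way to put a number on that: a round fisheye image inscribed in a square region of the sensor only covers PI / 4, roughly 78.5%, of that rectangle's pixels. A quick sketch of the arithmetic, assuming the EleCam places its two 960-pixel-wide lens images side by side in the 1920×960 frame ( the sensor sizes are the ones quoted above ):

```javascript
// Fraction of a w x h rectangle actually covered by the inscribed
// circular fisheye image of radius min(w, h) / 2.
function usefulPixelFraction ( w, h ) {
  var r = Math.min ( w, h ) / 2;
  return ( Math.PI * r * r ) / ( w * h );
}

// PanoView still image: one circle inscribed in the 4096 x 4096 frame
var panoView = usefulPixelFraction ( 4096, 4096 ); // PI / 4, about 0.785

// EleCam video: assume two 960 x 960 circles side by side in 1920 x 960
var eleCam = usefulPixelFraction ( 1920 / 2, 960 );
```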

Utilizing 360Tube you have three possible resolutions to choose from.


512×256 can be thought of as a quick preview rendering and should not be used as the final resolution for YouTube. The results would look very grainy and not sharp.

1024×512 is a nice medium if you actually render the video on your phone. While the final result will still be fuzzy, it allows you to upload the video shortly after you create it using your 360 camera.

2048×1024 is the high res version which looks much nicer on YouTube but will take a long time to render on your phone.

Higher resolutions will be available on the pro version which will allow you to process the video on your computer instead of your phone. Using for example the Gear 360 will give you much higher source resolution and you will want to retain the quality all the way through to the YouTube video.

I have encountered a frozen EleCam multiple times, where the only way to recover is to push the reset button, which can only be reached with a small metal pin you need to stick into the tiny pin hole. This has been impossible at times when you are out and about trying to shoot action videos.

The other way to reset is to let the battery drain but then you’re out of options anyways.

The PanoView seems much more mature, in its stability as well as in its feature set. You can, for example, shoot a time lapse video, which is a lot of fun. It also came with a waterproof case which I used extensively during our beach vacations and during our visits to Aquatia.

Posted in Uncategorized | Tagged | Leave a comment

360Tube phase Alpha-Alpha

I have submitted the very first usable alpha version of 360Tube and I am now in the process of spiffing it up, working on usability, removing clutter, and trying to avoid crashes.

One positive sign is that I am actually using the app myself quite frequently, whenever I have a new video recorded with my EleCam 360 which I want to upload to YouTube.

However, the FileSelector was kind of sucky, so I thought I'd take a few hours to replace it with a better version. That was about two days ago. I have spent a lot of time trying the available open source FileSelector libraries from GitHub. Unfortunately most of them offered little additional benefit over my current implementation, and in a lot of cases I had to spend time just making them compile.

I finally came across the super-fine-nice-cool UltraExplorer.


It took me a couple of hours to compile it and make it work inside Android Studio, but I am now at the point where I can put things together and integrate it into 360Tube. Unfortunately it does not currently support previews for videos, so I will have to spend some more time adding those.

Overall though I am happy with the progress. You can find a sample video from 360Tube here:

Posted in Uncategorized | Leave a comment

AstraNOS and where it is heading

I have recently been asked if the browser is the right environment to build on.

I think the browser is a great environment because it is available on every platform and does not require cross-platform compilation. Of course older browsers won't work, and newer browsers may break existing features, but such is life in the fast lane 🙂

My goal with AstraNOS is to provide a single sign on, multi device, shared social desktop environment with highly immersive and advanced capabilities.

I want to start by working on the basics which I will need: everyday apps such as chat, e-mail, document writing, spreadsheets, etc.

The second stage would integrate a comprehensive 3D desktop environment. We have been using 2D for so long, and I have had this vision of a true 3D environment for many years. I was waiting for some company to bring out a game changer; alas, that never happened.

I am tired of waiting.

One piece which was required for this vision to come true and to truly work only recently became available: the Leap Motion detector. The other pieces are voice control and gesture recognition combined with head tracking.


The 3D goggles are optional 🙂

I believe if you truly analyze how people communicate with each other, and you combine the above technologies with my #1 rule: “It MUST be obvious/simple to use”, you are onto something spectacular.

Now the browser is a first step, an attempt to see what is possible. It should provide the omnipotent flexibility of running on any system anywhere. ThreeJS, WebWorkers, and asm.js should allow for awesome power, and I believe Google is doing some skunkworks on even faster programming environments for browsers.

The pieces which turn out to require more than a browser can offer can be added through plugins, extensions, or native apps which are linked against the browser code. The ultimate goal is to create something which runs on every platform, including desktop, mobile, and wearables.

I have seen a great many attempts at generating a 3D environment ( you may like http://hwahba.com/ibex/ ). I just don't know how usable they truly are, and I am sure these projects only speak to a fraction of the online populace.


I am fascinated with the tech presented in the Iron Man movies, which is a combination of the above technologies plus a powerful AI which takes a lot of the guesswork out of handling a ‘personal computer’. My vision revolves around creating something stunningly similar.


Posted in Uncategorized | 1 Comment

AstraNOS for the stars

So I have not written any update on my web desktop for a while. That was for good reason, as I was busy doing a batch of other things these past months.

However I have never given up on it, and I am still using it heavily all the time to store pics, ideas, notes, videos, and other things.

I recently fixed my AWS instance and re-enabled Conference, my WebRTC based video conferencing tool. Also since I always have a multitude of windows open I added a virtual desktop feature to the mix.

The number of virtual desktops is currently hardcoded to 4, however I believe that this will be plenty.

Posted in AstraNOS, Cloud Storage | Tagged | Leave a comment

Zotac MiniPC EI751 setup

As I mentioned in my previous post, I have replaced my aging Zotac miniPC ID41 with a newer version, the Zotac MiniPC EI751.

The ID41 has an Intel Atom D525 1.8 GHz dual core with 4 processing units; I gave it a 128GB SSD and 8GB of RAM.

My new power horse, the EI751, comes with an Intel Core i7-5775R quad core CPU at 3.3GHz and the Iris Pro 6200 GPU chipset. I added a 500GB mSATA SSD and 16GB of 1600MHz RAM. For good measure I plugged my old 128GB SSD into the case as well, so that I now have 628GB of HD space.


I encountered one issue though during the installation process. The video spec for the Zotac reads like this: “2 x DisplayPort 1.2: 3840×2160 @ 60Hz; DVI-D: 1920×1080 @ 60Hz”

While this is straightforward, I did not realize the implication of the DVI-D output being limited to 1920×1080 until after I connected the system to my dual link DVI monitor and found myself unable to scale to its full native resolution of 2560×1600.

So I had to spend some more $ to get an active converter from DisplayPort to Dual link DVI-D.


My OS of choice is still OpenSuSE, and I am now off and running on the latest version, 42.2, which is a big upgrade coming from my previous OpenSuSE version 13.2.


At this time I am up and running and slowly converting my previous home directory over to the new system. The migration is very easy: as the old SSD is internal, I mostly just copy the files and directories over.

In my next blog post I will talk a bit about the performance of my new setup, and how it compares against the ‘not so weak’ ID41 dual core machine.


Posted in Uncategorized | Leave a comment

What is going on with Amazon

So I have been working on my Zotac ID41 box for the past 5 years and have been very happy with the hardware.


At the core this box has an Intel(R) Atom(TM) CPU D525 @ 1.80GHz. Since this was a bare bones system I went all out and gave it a nice 8GB of RAM, and a 128GB SSD.

Well, that was then, and by now this system is starting to show its age. So I thought I might update to the lightning-fast and still ultra-compact Zotac EI751 model.


Also a barebones system, it comes with an Intel Core i7-5775R quad core CPU @ 3.3GHz and the Iris Pro 6200 integrated graphics chip. The dual DVI port allows me to hook it up to my 32″ screen, and the 16GB RAM plus 512GB HD I bought along with it would make for a great system for development and AI.

Note that I said ‘would’. Well it turns out I ordered from Amazon, twice, the same model from different vendors. Both times I have received a dud.

The first time around, my order, which was an opened box “tested by an IT professional”, did not even power on. It seems the professional may be overpaid, or a marketing illusion by “BuyVPC“.

The second time I was smarter and ordered from a larger company. Turns out “Amazon Warehouse Deals” sent me the right box and the right computer cover but the wrong hardware.

I received an old dual core version of an old Zotac computer wrapped in the new box. Someone took the time to replace the cover on that machine with the EI751's, re-packaged it, and sent it out.

I wonder how often those crooks get away with this. When you order a present and send it to, say, your mom, will she notice the difference ?

So now I am left with 16GB of SODIMM I can't use and a 512GB HD I can't use, and I have to wait for the refund to be sent back ( again ) before I can search again for an i7 MiniPC.


After a lot of time searching for comparable offers or machines, I ended up looking back at Amazon Warehouse Deals. I found that it is part of Amazon itself, which improved my confidence to the point that I ordered the same part again ( they had two left ).

I received the PC on Thursday and it booted straight up in all of its 4 core ( 8 processing units with Hyper-threading ) beauty at 3.3GHz with 16GB RAM.

I now believe that there was an Amazon customer who ordered the PC, replaced the actual hardware, and sent it back for a refund. Amazon then checked that it was in working condition, but the tech guys did not validate the hardware specs; they wrapped it up and offered it for re-sale. I just hope that Amazon can get its money back from that person and give him/her a good slap on the wrist.




Posted in Uncategorized | Leave a comment

Gradle the cry-baby

So when I started this journey I was excited to learn new environments and to get my hands dirty with Android Studio 2.1.

I made fair progress until the point where I wanted to add support for native libraries to my project. 

Cross compiling the ffmpeg libraries took a bit of fiddling, but I eventually got them to compile for all Android architectures. The next small step, though, turned out to be mission impossible.

Adding NDK support for JNI in Android Studio using the experimental plugin for Gradle. Quite a mouthful, and honestly I would have preferred not needing to learn all of the intrinsic details of the build system. I would have preferred to do serious programming instead.

I have literally spent a whole working week fighting build issues, and mind you, I did make a lot of progress. However, every single approach I tried, every route I took, ended in failure.

Each possible route cost me between 4 hours and two days.

Cry baby

For example, the refusal of the build system to build a module without test cases. Then the constant complaints about duplicate entries when trying to assemble the APK. Why is the linker/assembler unable to handle the basic task of NOT including the same library more than once?
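For the duplicate-entry errors specifically, the stable Android Gradle plugin offers a packagingOptions block to pick one copy of a file or exclude it outright. I am not certain the experimental plugin honored the same syntax at the time, so treat this as a sketch, and the file paths below are only examples:

```groovy
android {
    packagingOptions {
        // keep the first copy of a native lib shipped by more than one dependency
        pickFirst 'lib/armeabi-v7a/libavcodec.so'
        // drop metadata files that several jars place under the same path
        exclude 'META-INF/LICENSE.txt'
    }
}
```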

Add to this frustration a bewildering set of ever-changing keywords and structures in the Gradle build system, and you will understand why it takes weeks to get basic operations to work. Trying to google any information requires paying close attention to the post date of each answer to avoid outdated information.

I am a big believer in tools which support the developer. It seems though that you must study the Gradle system and know all of the thousand little details and keywords and their meaning before you can actually use it.

Why there is this much pain for a basic use-case which goes just beyond the brim of Java is mind boggling.

Falling back to making progress.

Posted in Uncategorized | Leave a comment