Open sourcing auto-classify-images

Recently, I’ve been spending more time with machine learning frameworks. In particular, I’ve been fascinated by the potential of on-device machine learning with TensorFlow Lite. After open-sourcing my first Flutter project, I started experimenting with the tflite Flutter package. It ships with a great example app that I was able to adapt to try out any TFLite image classification model I could create.
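
(As a quick aside: a handy way to sanity-check a freshly converted .tflite model before loading it into an app is TensorFlow’s Python interpreter. Here’s a minimal sketch; the model path is just a placeholder.)

```python
# Sketch: inspect a converted .tflite model's input/output tensors
# before wiring it into an app. "model.tflite" is a placeholder path.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

print(interpreter.get_input_details()[0]["shape"])   # e.g. [1, 224, 224, 3]
print(interpreter.get_output_details()[0]["shape"])  # e.g. [1, num_classes]
```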

So, I started exploring ideas for apps that would use on-device machine learning to provide real utility to end users. Unfortunately, as I’m sure many folks can relate to, simply coming up with a useful dataset has proven quite a challenge. However, after many iterations of trying different datasets and different ways to build ML models for image classification, I ended up creating my own little framework that can easily adapt to most simple image classification scenarios.

The GitHub repo auto-classify-images provides Keras scripts, as well as a Docker container, that handle most of the complicated work of installing dependencies and setting up the software environment. The only real headache left for the user is getting their NVIDIA/CUDA drivers correctly installed.
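
To give a rough idea of the kind of pipeline the scripts automate, here’s a minimal Keras sketch (not the repo’s actual code): it trains a small transfer-learning classifier from a folder-per-class image directory and exports it to a .tflite file. The directory layout, base model, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (not the repo's code): train a small image classifier
# with Keras transfer learning and export it to TensorFlow Lite.
# The "data/train" folder-per-class layout and all hyperparameters are
# illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)

# Assumes data/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

# Frozen MobileNetV2 base with a small classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Convert the trained model to a .tflite file for on-device inference
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```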

Anyway, I’ll be posting more results from my experimentation as I can, but I wanted to give a quick shout-out to my new open-source project. Please check it out, play with the code, and let me know what you think!

GitHub auto-classify-images repo: https://github.com/dannydabbles/auto-classify-images

Joining the Android Developer Challenge

A few weeks ago, I was hungry for a new project to work on. I wanted something that would complement my day-to-day work on cross-platform React Native apps, but without having to wrangle third-party dependencies or CSS. Lo and behold, Dart/Flutter came to my rescue. Together, the language and framework provide a powerful toolset for developing apps that run on both Android and iOS.

At first, it was a bit difficult to wrap my head around Flutter’s paradigms, especially which widget to use when and how concurrency is managed. But soon I learned the difference between a Container widget and a Column widget, and it was smooth sailing from there. The learning experience was made all the more enjoyable by Flutter’s hot-reload feature, which lets me see changes to my app in near real time.

Next thing I knew, I had a working prototype for my ShuffleShelf project. Normally it would have taken me much longer to build such a smooth, interactive, and reactive app, but with Flutter it was easy. Of course, since it’s just a prototype, there is still a lot of work to do, but it’s coming along. In fact, I like the project so much that I’ve decided to submit it for consideration in the Android Developer Challenge. I have no idea if I’ll be chosen as one of the 10 winners, but it has been a fun excuse to document my work in any case.

I do hope I win the challenge, though, as it comes with help from Google engineers specializing in machine learning. That would enable me to add bulk uploading of several books at once to my app, a major point of differentiation from other book-tracking apps. For now, I’ll just keep my fingers crossed and keep checking off my calendar until December 15th.

Please take a look at the source code and file issues if you notice any major bugs or typos, or if you’d simply like to request additional features or details.