Recent Posts

Tech Newsletter: Issue 7

The latest technology news from the past week (January 7-14th).  Enjoy. 🤖

🔗 Links
Deep Learning at GILT: machine learning, computer vision and fashion recommendation at scale http://gi.lt/2jbEMy5

Deep learning 2016 the year in review http://bit.ly/2jbGzTE

The Google Brain team — Looking Back on 2016 http://bit.ly/2j7KvoV

“OK Facebook”—Why stop at assistants? Facebook has grander ambitions for modern AI http://bit.ly/2jid5q2

Packers win Super Bowl LI? One A.I. platform using “Swarm Intelligence” believes so http://es.pn/2ioa6Nk

The Periodic Table of AI  http://bit.ly/2jClQZt

World’s largest hedge fund to replace managers with artificial intelligence http://bit.ly/2jClylh

Until next time,
Steve

Tech Newsletter: Issue 6

The latest AI, machine learning, and bot news from the past week (January 1-7th).  Enjoy. 🤖

🔗 Links
You Know Your Product Team is Failing — Do You Know Why? http://bit.ly/2iKouhC

Six Commandments for Writing Good Code http://bit.ly/2hY80Dy

Three challenges you’re going to face when building a chatbot http://bit.ly/2iewnJ9

LSTM Neural Network for Time Series Prediction http://bit.ly/2jc0fdc

TensorKart: self-driving MarioKart with TensorFlow http://bit.ly/2hStIXN

Deep Learning for Video Classification and Captioning http://bit.ly/2i6vEtI

Continuous online video classification with TensorFlow, Inception and a Raspberry Pi http://bit.ly/2izE2mk

Building an Event-Based Analytics Pipeline for Amazon Game Studios’ Breakaway http://amzn.to/2ixQF0L

Rewriting TensorFlow Graphs with the GTT http://bit.ly/2ioW51a

Machine Learning Crash Course: Part 2 http://bit.ly/2hvThy1

Until next time,
Steve

Tech Newsletter: Issue 5

The latest AI, machine learning, and bot news from the past (two) weeks (December 20-31st).  Enjoy. 🤖

📒 News & Notes
Happy New Year!  2017 looks primed to be a big year for assistants & bots (with AI sprinkled throughout).  This year I’m planning to dive a bit deeper on the bot-building front and hope to create my first “real” bot/assistant for our clients in Q1.

Our house was invaded by Echos and Google Homes over the holidays, and as I watch our family adjust to these ever-present assistants I can’t help but think about what the future will look like as they become more pervasive in people’s homes and offices.  I’m looking forward to diving into the various options for natural language understanding and bot building and will be sure to share interesting links I come across.  What are you most interested in exploring in 2017?

Now, onto the links for the past couple of weeks…

🔗 Links
Session-based Recommendations with Recurrent Neural Networks http://bit.ly/2i9qwFi

So you are interested in deep learning http://bit.ly/2hR6hg5

Artificial intelligence is going to make it easier than ever to fake images and video http://bit.ly/2ileCII

rasa NLU: Open-source bot tool for natural language understanding http://bit.ly/2gOyd1T

Voice Is the Next Big Platform, and Alexa Will Own It http://bit.ly/2gVTF5j

WebRTC: The future of games? http://bit.ly/2i5DfeX

Cheers,
Steve

Tech Newsletter: Issue 4

The latest AI, machine learning, and bot news from the past week (December 12-19th).  Enjoy. Subscribe here 🚀🤖

🔗 Links 
“Building Jarvis” – Mark Zuckerberg on how he built his home AI agent  Link

Move over Amazon Echo & Google Home: here comes Microsoft Cortana Link

Google Assistant APIs are here!  Link

Own ChatBot, based on recurrent neural network. Link

8 ways to build a better business bot Link

7,500 Faceless Coders Paid in Bitcoin Built a Hedge Fund’s Brain Link

📚 Books
Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers

Tech Newsletter: Issue 3

The latest AI, machine learning, and bot news from the past week.  Enjoy. 🤖 Subscribe here 🚀.

🔗 Links 

How to detect image contents from Ruby with Amazon Rekognition http://bit.ly/2hk427q

Mobile is eating the world http://bit.ly/2hEWUPg

Inside the secret meeting where Apple revealed the state of its AI research http://bit.ly/2gec1Om

Overcoming Bias: This AI Boom Will Also Bust http://bit.ly/2h8s2Xe

Bots That Can Talk Will Help Us Get More Value from Analytics http://bit.ly/2gm8rTm

Richard Socher on the future of deep learning http://oreil.ly/2gsbWKZ

This is how Chatbots will Kill 99% of Apps http://bit.ly/2gEdJvR

🎧 Podcasts 

The O’Reilly Bots Podcast covers advances in conversational user interfaces, artificial intelligence, and messaging that are revolutionizing the way we interact with software. https://www.oreilly.com/topics/oreilly-bots-podcast

Until next week,
Steve

Tech Newsletter: Issue 2

Welcome to the second issue of my newsletter.  While I’m not restricting the content I include in the newsletter to a single topic, I have been focused on AI, machine learning, and bots recently, so you can expect those themes to be a bit more prevalent.  What topics are you most interested in?

🔗 Links 
OpenAI: Universe – “We’re releasing Universe, a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.” – https://openai.com/blog/universe/

Artificial Intelligence, Revealed – A Series of Explainer Videos from Facebook AI http://bit.ly/2g0xygQ

Alexa, Tell Me Where You’re Going Next http://bit.ly/2gaUolK

Botkit, now with love from Microsoft and IBM Watson http://bit.ly/2g36Wcu

How to reward skilled coders with something other than people management http://bit.ly/2gVvWGb

Derek Sivers: Tilting my mirror (motivation is delicate) http://bit.ly/2g9p5ba

Jobs-To-Be-Done: The Product Marketing Framework Intercom Used To Reach $50M in ARR http://bit.ly/2gOYCRP

📚 Books 
I’ve been slacking on my reading recently.  After I came across “The Need to Read” I decided to get back into the habit of reading regularly.  As such, I picked up Designing Your Life: How to Build a Well-Lived, Joyful Life.  I’m only a few chapters in, but I’m really enjoying it thus far.

🎧 Podcasts 
I’ve really been enjoying How I Built This from NPR.

That’s it for this week.  Until next time.
~ Steve

Tech Newsletter: Issue 1

Welcome to the first edition of my Tech/Dev newsletter.  Each week I track interesting tech and developer news in AI, machine learning, bots, Node, Go, Ruby, Elixir, and more, and share it with my subscribers.  Subscribe at: http://steveeichert.com/newsletter

Interesting Links 🔗

Building and Motivating Engineering Teams http://bit.ly/2f6Oi73

A Primer on Neural Network Models for Natural Language Processing http://bit.ly/2fwW86o

Making the Switch from Node.js to Golang http://di.gg/2g90Bxv

Eve: Programming designed for humans http://bit.ly/2ggue39

Web development using the Elm programming language http://bit.ly/2fICCpS

Dockerizing MySQL at Uber Engineering http://ubr.to/2go6GVU

Rate Limiting the right way http://bit.ly/2fIusOi

Why Native Apps Really are Doomed: Parts 1 & 2 – Part 1: http://bit.ly/2fI6b7I Part 2: http://bit.ly/2fW3m6H

Favorite New App 📱
I’ve been enjoying 🐳 Whale: Q&A

Favorite New Instagrammer 📸
Nicholas Steinberg: Link

Facebook API, I don’t like you

During the development of Reader for Facebook I ran into all kinds of issues with the Facebook API. I started with the goal of building a relatively simple application that would allow users to log in to Facebook, retrieve their newsfeed, and have it read back to them using a Text to Speech engine.

At the time I started the app, AVSpeechSynthesizer wasn’t available in the iOS SDK, so I hit a wall trying to get Flite and OpenEars to work. It didn’t help that I was trying to do so in MonoTouch (it wasn’t yet Xamarin). I eventually got it working, only to swap it all out when the new iOS APIs were released to do text to speech natively. Little did I know the pain was just starting.

As I developed Reader, I worked my way through the Facebook API and determined that a Graph API call to me/home, combined with the APIs for posting comments and likes, would be all I’d need. With the API documentation in hand, I set out to build the core Reader Facebook interactions.

Along the way I ran into lots of unexpected twists and turns that had me spending way more time than I wanted combing through Stack Overflow posts, trying to make sense of the Facebook API and the results I was seeing (which often didn’t line up with my expectations).

Low Res Images

For some reason, the Facebook API often returns very low resolution images for photo posts. The resolution is so low that showing anything more than a tiny thumbnail isn’t possible. To work around the low resolution images returned from the main me/home Graph API call, I make a second batch request for all the “photo” posts to retrieve all the available images, then choose an appropriately sized image for display. Unfortunately, for a subset of photo posts the Graph API request to retrieve the images fails without any meaningful error message. My best guess is that some sort of permission on these posts prevents the list of available images from being retrieved through the API. Regardless, the result is an occasional low resolution photo being displayed in Reader.
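The workaround can be sketched roughly as follows. This is an illustrative sketch, not Reader’s actual code (the app is C#/Xamarin); the helper names are mine, though a photo object’s `images` field and the Graph API’s batch endpoint are real features:

```ruby
require "json"

# Given the object IDs of the "photo" posts returned by me/home, build a
# single Graph API batch request that fetches each photo's "images"
# array (every available size, with width/height/source).
def photo_batch_payload(photo_ids)
  batch = photo_ids.map do |id|
    { method: "GET", relative_url: "#{id}?fields=images" }
  end
  # The :batch value is POSTed as a form field to https://graph.facebook.com/
  { batch: JSON.generate(batch) }
end

# Given one photo's "images" array, pick the smallest image at least
# `min_width` pixels wide, falling back to the largest available.
def best_image_url(images, min_width: 640)
  sorted = images.sort_by { |img| img["width"] }
  candidate = sorted.find { |img| img["width"] >= min_width } || sorted.last
  candidate && candidate["source"]
end
```

One round trip for all the photos keeps the extra latency tolerable, and picking the smallest size above a threshold avoids downloading a multi-megapixel original just to fill a feed cell.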

Newsfeed Filters Don’t Work

Somewhat recently I wanted to allow users of Reader to choose which filter to use when retrieving posts from Facebook. Although the default “newsfeed” filter is likely to be the most popular, it would be nice to let folks filter who they hear in Reader for Facebook by selecting from the available filters (Friend Lists, Apps, etc.). Unfortunately, in many cases the Facebook API pays no attention to the filter passed in the me/home?filter=XXXX Graph API request.
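For reference, the filtered call is just a query parameter on me/home. A minimal sketch of building the request URL (`newsfeed_url` is a hypothetical helper of mine, not Facebook SDK code; the actual filter keys come from a separate me/home/filters call, and "nf" below is only a placeholder):

```ruby
require "uri"

# Build the Graph API newsfeed URL, optionally scoped to a filter key
# (e.g. a friend-list or app filter key returned by me/home/filters).
def newsfeed_url(access_token, filter_key = nil)
  params = { access_token: access_token }
  params[:filter] = filter_key if filter_key
  "https://graph.facebook.com/me/home?" + URI.encode_www_form(params)
end
```

The frustrating part is that the request is well-formed and returns 200 either way; the API simply ignores the filter for some keys, so there’s nothing to catch on the client side.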

Random API Errors with Unexpected Side Effects

On the day Reader for Facebook was approved, my wife rushed to the App Store to download a “real” copy (she had a beta build on her phone). An hour or two later she called asking for a refund. It turns out that the comments she was posting through the app were returning errors, so she tried several times, each time getting the same error. Despite the errors, her comment was posted to Facebook multiple times. For certain calls the Facebook API will return an error even though the requested action goes through, which makes it difficult for an app developer to handle the failure appropriately. An API call that returns an error shouldn’t have side effects.
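The defensive pattern I ended up wanting is to verify whether the action actually landed before retrying. A hedged sketch, using a stand-in `api` object with made-up `post_comment`/`recent_comments` methods rather than the real Facebook SDK:

```ruby
# Post a comment, but because the API sometimes errors even though the
# comment was created, re-read the post's recent comments before
# retrying and bail out if our text already landed.
def post_comment_safely(api, post_id, text, retries: 2)
  retries.downto(0) do |attempts_left|
    begin
      return api.post_comment(post_id, text)
    rescue StandardError
      # Check for the "error but it worked anyway" case before retrying.
      return :posted_despite_error if api.recent_comments(post_id).include?(text)
      raise if attempts_left.zero?
    end
  end
end
```

It’s an ugly extra round trip for every failure, but it would have saved my wife from posting the same comment four times.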

Newsfeed is different via the API

In addition to the above “errors,” API developers also have to deal with the fact that results from the Facebook API don’t match what users see on Facebook. In Reader, I request the user’s home newsfeed; however, the items returned often don’t match what appears in the newsfeed within the Facebook app (or website). I’m still not sure what causes this, but it definitely leads to unexpected inconsistencies that users notice.

That’s a small sample of the Facebook API issues I ran into while developing Reader for Facebook. Based on my experiences, I’m not sure I’d build anything substantial on top of the Facebook API; it’s too inconsistent and unreliable. I still believe leveraging the Facebook SDK and API to create social “hooks” for your app is worth considering, but for more substantial integrations the pain isn’t worth the “payoff.”

Reader for Facebook

Today Reader for Facebook was approved by Apple and made available in the App Store. You should totally download it!

Background

I started working on the app that would eventually become Reader for Facebook over two years ago, hoping to learn a bit about iOS development.  I figured I’d start with a small, simple app that I could code and release in relatively short order.  Here I am, two years later 🙂

Reader connects to your Facebook timeline and reads it to you using Text to Speech.  It’s a nice way to catch up on what’s happening on Facebook when you’re busy driving, walking, running, or just relaxing and can’t physically look at your phone.  On my drive to and from work I’ll start up Reader and catch up on what my friends and family have been up to during the day.

While the intention was never to create a full-fledged Facebook client, the app evolved a bit to support basic functionality for liking posts and adding comments, and had to be improved to display the various types of content that people post (images, videos, links).

While the development was relatively straightforward, I ended up taking several extended breaks from the app due to frustrations with the Facebook API.  While the API is very rich and provides a lot of functionality, it doesn’t always behave as you would expect and has a tendency to stop working or return random errors that are difficult to track down.  In fact, the version of the app released today has a bug related to one of these API bugs: despite the Facebook API returning an error message saying the request to create a comment failed, the comment does indeed get posted.  After downloading the app my wife posted the exact same comment four times due to this bug and called me asking for a refund! (Not really, but it sucks for your wife to find bugs in your app; after all, I try to pretend like I know what I’m doing around these parts.)

In addition to the breaks caused by frustrations with the Facebook API and its general unavailability, the project lost some steam early on when one of the Facebook hackathons aired on TV and one of the projects was, more or less, Reader for Facebook.  With any development project, but especially side projects, it’s important to stay motivated to keep pushing forward, and I lost the motivation to finish Reader shortly after.

Luckily my wife was there to encourage me and ask whether I had released the app.  I had to keep explaining that it was nowhere near complete and that I had about 100 other features I thought it needed.  Eventually I decided I needed to simplify and focus on the core functionality of reading a user’s timeline in order to get the app shipped.  After all, the motivation for the project was to learn a bit about iOS development, learn the process of submitting an app, and get it available in the App Store.  While the app still lacks many features I had planned, it does achieve its primary purpose and is available in the store (YAY!).  If all the users (my mom, wife, and other family) like the app, perhaps some of the other features I’ve thought about will make it into an updated release.

Technical Notes

The app was built with Xamarin (which was called MonoTouch and was part of Novell at the time I started the app!). Over time, the tooling and overall experience of working with Xamarin improved by leaps and bounds. Xamarin Studio is a formidable, feature-rich IDE that makes developing iOS (and Android) apps a pleasant experience. I found the biggest hurdle to be finding compatible bindings for the existing Objective-C libraries I wanted to leverage in the app. Over time, and with the introduction of the Component Store, this improved dramatically…but early on it was a bit rough.

At the time I started the project, C# was the only supported language, and given my familiarity with it, it was a natural choice for my first app. It allowed me to focus on learning the core concepts and paradigms of iOS development rather than the language itself. I still have a bit of an aversion to Objective-C and am glad I’ll have the option of forgoing learning it in depth now that Swift and RubyMotion are options for projects I decide to attack with a non-Xamarin tool belt.

One of the big headaches early on was finding a text to speech API that worked reliably and supported the MonoTouch toolchain. I started with Flite, moved to OpenEars, and thankfully, for the final version, was able to use the built-in speech synthesizer classes available in the iOS SDK. Voice recognition is still a pain point and has prevented me from adding the ability to interact with the app via voice commands, so hopefully that API will be opened up soon as well. In the meantime I’m banging my head against the OpenEars SDK trying to get commands like “like,” “comment,” “next,” and “refresh” supported in Reader. Not to mention the ability to record full-fledged comments or posts, which is nearly impossible without shelling out $ for commercial speech recognition libraries.

The only other notable SDK that Reader makes use of is the Facebook SDK. Again, early in the process there were headaches getting a MonoTouch-compatible version of the iOS SDK, but more recently the Xamarin Component Store has made the process silky smooth. The SDK and API themselves are still buggy as all get out, but I’m sure that will all be ironed out with Facebook’s recent commitment to API reliability (yeah, right!).

Design Notes

At the very early stages of development I engaged a designer on what was then called “TalkingFaces” (seriously, how terrible is that name!). I worked with him on a few iterations of a design but never ended up with anything I liked. Rather than stress too much about the design, I pressed on with development, hoping that as I made more progress I’d find some design inspiration and simply skin the final app to make it look super perty. As you can tell from the final version, it never did get super perty, but I hope it’s also not terrible.

I designed the app, icons, and screenshots myself. I had many fights with Photoshop and other tools, which was another reason this project took me two years to complete. Somewhat recently I downloaded Sketch and got comfortable enough with it to complete the design items necessary to submit an app (icon, etc.). I’m still no designer, but I’m looking forward to learning more about Sketch and trying to improve my design skills. The current app needs a design refresh to add some depth and separation to the UI, but I’ll tackle that in an update; shipping was my goal, and the current UI was “good enough” to ship.

Conclusions

It’s a relief to FINALLY move this project into the shipped category. I have a lot of ideas for improving the app but in the short term I’m going to take a little break and think about other side projects I want to move along and get closer to shipping.

If you have any feedback, comments, or suggestions for the app itself hit me up on twitter at @steveeichert, or head over to the Facebook page for Reader and give it a “Like”.

Oh, and don’t forget to download the app and recommend it to all your friends!  Download Reader

Playtime with RubyMotion and ShowKit

I’ve been spending time recently with RubyMotion. Since I’ve been working in Ruby for the last several years, I thought I’d give the Ruby-flavored approach to iPhone development a go. At this point I’ve only spent a couple of days hacking around, but thus far I’ve enjoyed the process.

To guide my “playtime” with RubyMotion, I decided to take a look at the ShowKit SDK and try to port one of their examples to RubyMotion. The most obvious candidate was their Conference demo, which allows two ShowKit users to video conference. I downloaded the Xcode demo code from ShowKit’s GitHub repo and pretty quickly worked my way through migrating it to RubyMotion. While there are some very minor differences from Ruby to support the Objective-C heritage of the runtime, I felt right at home. Within a couple of hours I had a mostly functional app running. As with any experiment, it was fun to see it up and running on iOS devices, and being able to video conference with my wife and daughter in the next room was really…well…pointless, but fun.

While this small experiment didn’t require much work, I did use BubbleWrap and motion-layout for the project. After exploring the BubbleWrap API, I can see lots of useful bits that I’m sure come in handy when building “real” apps. The first version of my UI didn’t handle device rotation, but within a few hours of playing with motion-layout, and eventually getting my head around some of the more obscure, less documented features of the ASCII-based layout language, I had a UI that responded appropriately to both device orientations. Overall my UI is pretty ugly, but it is functional. I’d like to spend some time trying to make a nicely stylized UI, so I may do that with this demo project, or perhaps start another project and explore that part of iOS development (with RubyMotion). I didn’t mess with Interface Builder at all for this; I’ve heard from other iOS devs that they tend to draw their own UIs, and since I’ve always done the same I figured I’d go that route, especially given the super simple UI.

All in all, my RubyMotion experience was positive. I need to spend some time on a greenfield app rather than a port, and do something a little more involved to get a true feel for the toolchain, but I can see why folks enjoy working with RubyMotion. I do find myself asking myself (yes, I like to talk to myself often) whether taking the plunge into Objective-C and Xcode would be a better use of my time, and I’m still a bit torn. I’ve also used Xamarin in the past, and their cross-platform story, combined with a language I like (C#), makes me feel like I need to purchase the Paradox of Choice.

On the ShowKit front, I found the SDK to work as advertised.  My app was more or less a line-by-line port of the Objective-C version of their demo. While I didn’t spend any time on the more difficult elements of building a real app with ShowKit, I did find the documentation sufficient and the SDK worked as I expected. I have a few ideas for how I might leverage it in real projects™ in the future, so as those ideas become more concrete I’ll look to share more experiences.

Github Repo: https://github.com/eichert12/MotionConference