I'll start this post with a disclaimer: this project has not produced conclusive results. But I'm optimistic! In fact, I've got a couple of hurdles that maybe some of the people reading this can help with (which is what I'm hoping for).
So let me describe what I've been working on. This is a Python tool with two modes of operation. The first mode looks at a single Reddit post and gathers statistics about the first 5 users to comment: average user karma, average account age, average number of interactions on the subreddit, and number of interactions with the other commenters in the thread (max, average max, and average).
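To make that first mode concrete, here's a minimal sketch of the gathering step. This is not the actual tool: it assumes the PRAW library (which my code may or may not use), the credentials and post ID are placeholders, and it only covers the karma and account-age averages.

```python
import time
import praw

# Placeholder credentials; a real script needs a registered Reddit app
reddit = praw.Reddit(client_id="YOUR_ID",
                     client_secret="YOUR_SECRET",
                     user_agent="first-commenter-stats")

submission = reddit.submission(id="POST_ID")   # placeholder post id
submission.comments.replace_more(limit=0)      # drop "load more" stubs

# The five earliest top-level comments
first_five = sorted(submission.comments, key=lambda c: c.created_utc)[:5]

karmas, ages = [], []
for comment in first_five:
    author = comment.author
    if author is None:                         # deleted accounts
        continue
    karmas.append(author.comment_karma + author.link_karma)
    ages.append((time.time() - author.created_utc) / 86400.0)  # in days

if karmas:
    print("average karma:", sum(karmas) / len(karmas))
    print("average account age (days):", sum(ages) / len(ages))
```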
May 28, 2017
March 25, 2017
A Visit to eSight
After working on the All-Seeing Pi, a series of events unfolded such that Dan and I were put in contact with a company called eSight. The founder of eSight had the same inspiration as the All-Seeing Pi many years ago, and has now developed the third generation of a vision assist platform.
So we were invited to come check out their technology and see how it stacks up to our contraption. For starters, it is a very sleek and lightweight design that looks a little more stylish. It also has some great vision enhancement functionality like contrast boosting, magnification, and screenshots. Using these features, Dan was able to read an eye chart at the 20/30 level, which is better than he was able to see before he lost his vision!
This hasn't stopped our interest in the All-Seeing Pi though, as eSight comes with a pretty hefty price tag of $10k. Besides, an existing commercial solution is never a good reason to stop a DIY project!
March 01, 2017
The All-Seeing Pi
This post is about a vision enhancement platform called The All-Seeing Pi that I have been working on with my friend Dan, who is blind. People who are blind rarely have no vision at all though, and in Dan's case, he still has a little bit of sight in one eye. He's also the first to tell you how much technology can do to enable mobility.
From these discussions, we came up with the idea for a video feed connected to a display, with a wearable screen in the ideal spot for maximum vision. This allows someone to focus on just the screen, and let the camera capture the detail and depth of the environment.
In the end, the prototype served as a successful proof of concept. Check out the video above for a field test and some more discussion! Dan also likes to push the limits of what can be done with his disability, which he chronicles at his blog Three Points of Contact.
In the rest of this post, I'll be talking about how to build the device. This may be useful if you or a friend have a similar condition, but it is also a great starting platform for a Raspberry Pi based augmented reality rig. The general setup is a Raspberry Pi with a camera module driving a small HDMI (not SPI!) display. The video feed is provided via OpenCV and RaspiCam, with code and install details below.
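The real feed code is C++ with RaspiCam, but the heart of it is a simple capture-and-display loop. Here's a rough Python/OpenCV stand-in for that loop (not the build's actual code), assuming the Pi camera shows up as a standard video device:

```python
# Grab frames from the camera and mirror them to the attached display
import cv2

cap = cv2.VideoCapture(0)  # the Pi camera, assuming the V4L2 driver is loaded

cv2.namedWindow("all-seeing-pi", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("all-seeing-pi", cv2.WND_PROP_FULLSCREEN,
                      cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("all-seeing-pi", frame)     # push the frame to the HDMI screen
    if cv2.waitKey(1) & 0xFF == ord('q'):  # 'q' quits
        break

cap.release()
cv2.destroyAllWindows()
```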
Labels:
3d printing,
AR,
blind,
C++,
digital eyesight,
disability,
DIY,
eyesight enhancement,
fashion,
hardware,
opencv,
picamera,
raspberry pi,
raspicam,
software,
travel,
vision assist,
VR,
wearable
February 17, 2017
Video Tutorial: Astrophotography Barn Door Tracker
This is simply a video overview of my previous post on astrophotography barn door trackers. After getting lots of questions about how to use it, I thought a video might come in handy!
February 11, 2017
N-Body Orbit Simulation with Runge-Kutta
In a previous post I introduced a simple orbital simulation program written in Python. In that post we simulated orbits by taking the locations and velocities of a set of masses and computing the force on each body. Then we calculated where all the bodies would end up under that force a small time step into the future. This process was repeated over and over, and was used to simulate gravitational systems like our solar system, giving outputs like the one you see below.
Orbit paths from the previous example
This technique is called the Euler method. If you're not familiar with using numerical methods to simulate orbits, I'd recommend taking a look at that post first!
Part 1: Python N Body Simulation
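For reference, the Euler update from that post boils down to something like the sketch below. This is not the original code; the gravity helper and names here are just for illustration.

```python
import numpy as np

G = 6.674e-11  # gravitational constant

def accelerations(pos, masses):
    """Pairwise Newtonian gravity: acceleration felt by each body."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]  # vector from body i to body j
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def euler_step(pos, vel, masses, dt):
    """Advance every body by one small time step dt."""
    acc = accelerations(pos, masses)
    vel = vel + acc * dt   # nudge velocities under the computed force
    pos = pos + vel * dt   # then step positions forward
    return pos, vel
```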
In this post I will be adding a more advanced time stepping technique called the Fourth Order Runge-Kutta method. Kids these days just call it RK4. I'll walk through the logic behind RK4 and share a Python implementation. I will also link to a C++ implementation, and do a brief performance comparison.
Fourth Order Runge-Kutta
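Here's the shape of a single RK4 step as a generic sketch; the `derivative` argument is a stand-in for the function that turns the current state into its rate of change (for the N-body case, `state` would bundle positions and velocities into one numpy array).

```python
def rk4_step(state, t, dt, derivative):
    """One classic fourth-order Runge-Kutta step.
    `derivative(state, t)` returns d(state)/dt."""
    k1 = derivative(state, t)
    k2 = derivative(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = derivative(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = derivative(state + dt * k3, t + dt)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Quick sanity check on dy/dt = -y, whose exact solution is e^(-t)
y, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(y, t, dt, lambda s, u: -s)
    t += dt
print(y)  # ~0.36788, matching e^-1 to several decimal places
```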
January 25, 2017
Launch a Script Using Alexa Voice Commands
In a previous post, I showed how you can build a smart mirror with an Alexa voice assistant on board. The MagicMirror software and Alexa voice assistant were both hosted on a Raspberry Pi, but unfortunately there was no obvious way to get Alexa to control the smart mirror, or deliver commands to the Raspberry Pi.
I have now found a solution that is free, reliable, and very flexible. This is done by writing an Alexa Skill that adds a message to a cloud-hosted queue based on your voice command. The Raspberry Pi repeatedly checks this queue for new messages and runs customizable behaviour based on message contents. This is not limited to smart mirror applications, or even to Raspberry Pis. It can be used to launch any script you want on any platform that can connect to Amazon's SQS.
Here is a demonstration and a high-level overview of how it works:
And a follow-up demonstrating an extension of this idea:
In this tutorial I will focus on using this to turn the smart mirror on and off. Adding your own scripts should then be fairly straightforward.
The steps will be as follows:
- Create a queue using the Amazon Simple Queue Service (SQS)
- Write some Python code to read and write to this queue (see the sketch after this list)
- Write a Lambda function that posts messages to the queue
- Write an Alexa Skill that calls the Lambda function
- Schedule a task on your Raspberry Pi to read queue messages and take appropriate action
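To give a flavour of the queue-reading step, here's a minimal sketch of the Raspberry Pi side. It assumes boto3 with AWS credentials already configured; the queue URL is a placeholder, and the `vcgencmd display_power` commands stand in for whatever scripts you actually want to launch:

```python
# Poll the SQS queue and run a command based on the message body
import subprocess
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mirror-commands"
sqs = boto3.client("sqs")

# Commands keyed by message body; placeholders for your real scripts
COMMANDS = {
    "mirror_on":  ["vcgencmd", "display_power", "1"],
    "mirror_off": ["vcgencmd", "display_power", "0"],
}

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)  # long poll
    for msg in resp.get("Messages", []):
        if msg["Body"] in COMMANDS:
            subprocess.call(COMMANDS[msg["Body"]])
        # delete so the message isn't processed twice
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```

The long poll (`WaitTimeSeconds=20`) keeps the number of requests, and therefore the cost, low; you can run this as a long-lived loop or adapt it into a scheduled one-shot check.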
January 11, 2017
DIY Selfie Stick
This is a fun and very easy project I came up with while trying to build a wireless shutter for an iPhone. I knew selfie sticks could trigger your camera shutter, so I was trying to find out what information they were sending through the 3.5mm aux port (aka headphone plug) to do this.
In this process I learned that headphone volume buttons will also trigger the camera shutter!
So I plugged in some headphones with volume control and looked for a way to attach my phone to a stick. The result is the spatula-sponge-elastic combo you'll see here.
Take a look for yourselves!
Labels:
DIY,
easy,
hardware,
iphone,
lifehack,
photography,
really easy,
selfie,
selfie stick
January 05, 2017
Markov Chains: The Imitation Game
In this post we're going to build a Markov chain to generate some realistic-sounding sentences impersonating a source text. This is one of my favourite computer science examples because the concept is so absurdly simple and the payoff is large. This will be done using Python, and your final code will look like this.
Before I explain how it works though, let's look at an example generated from the past four Cyber Omelette posts:
"The first step to attach our mirror is done by simply screw the backing with a Raspberry Pi's into the filename. That looks like installing and blinking cursor. Type the wood screws.
Gorilla glue.
Electrical tape.
Extension Cord with multiple passes of the all Together Once it hasn't really stuck in place until all directions. Clean your monitor."
So it's not exactly coherent, but it comes pretty close! I also built a Twitter bot called mayhem_bot (git repo) using this concept. This bot imitates users and trending hashtag discussions, so send it some tweets if you want more examples.
Rat dreams of stealing the treat bag and NOT getting caught. #unlikelyanimaldreams #mayhem_bot — Mayhem Bot (@mayhem_bot) December 20, 2016
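The final code linked above has the polished version, but the core idea fits in a few lines. As a simplified sketch (not the linked code): build a map from each word to the words that follow it in the source text, then random-walk through that map.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to every word that follows it in the text."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)  # duplicates preserve frequencies
    return chain

def generate(chain, length=20):
    """Random-walk the chain to produce a run of words."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:      # dead end: this word is never followed
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

chain = build_chain(open("source.txt").read())  # placeholder source text
print(generate(chain))
```

Because followers are stored with repeats, common word pairs from the source come up more often in the walk, which is what makes the output sound like the original author.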