Liquid Cooling an ASUS GTX 1070 Using the NZXT G12 GPU Cooler & Corsair H75

My buddy recently decided he wanted to watercool his ASUS GTX 1070. There are only two real choices: the NZXT G12 bracket (which replaces the stock cooler) and the Corsair HG10 N980. The HG10 isn’t actually made for the 10 series, but can apparently be modified to clear the VRM caps using some power tools. The G12 claims full compatibility with 1070-series cards, so he went with that. He also picked up a Corsair H75, which is fully compatible with the G12 bracket.

After taking the stock cooler off, it was immediately clear that the VRM caps were going to interfere with the curved standoffs the G12 uses to mount to the card.

Nothing a little grinding can’t fix! With a file, some thin washers, and about 15 minutes, it’s easy to modify the G12 to fit this specific card. The main issue here is the flanged lip visible on the bracket below. It hits the top of the VRM caps.

We started by grinding the lip off using a metal file. This only took about 3 minutes; the brackets are relatively soft.

A bit of black sharpie afterward covered it up well enough (these two brackets aren’t visible once the cover is in place).

We also added some very thin washers between the bracket and the topside of the PCB to add the slight extra bit of offset needed to clear the caps. They’re no more than 0.5mm thick.

After making these two changes, everything fit just like it should. We added a thin ring of foam between the white bracket and the H75 to make up for the width the thin washers add, but I think you could get away without doing this.

That’s it! Note that when you’re putting this bracket on, with or without this mod, you should only tighten the thumbscrews until you see the card flex ever so slightly, then back each thumbscrew off by about one full turn. The cooler does not need to be pressed tightly against the die; over-tightening it puts unnecessary strain on the PCB and components, which can lead to premature failure of the card through thermal cycling. It’s also possible to crack or damage the GPU die outright.

A few more pictures of the final product:

Downloading & Saving a Nest Cam Live Stream Using a Raspberry Pi + Debian Linux

Tonight, I stumbled on an interesting post on Reddit linking to two Nest Cams livestreaming the landfall of Hurricane Irma from a Miami condo. I popped the streams open on my phone (the hurricane had not hit yet, and the sky was mostly clear) and realized I was probably going to fall asleep before the power went out and killed the stream.

Being an incredibly good-looking and overconfident dweeb, I then thought: there’s got to be a good way to rip and save a Nest Cam livestream so that I could watch it tomorrow, maybe post a time-lapse, and gain all of the karma. After all, it’s already been done with tons of other streams from various sites. It seemed like a decent Friday night hackathon in the making, and would at least keep me from falling asleep in the meantime.

I decided I wanted something that was (1) automatic, and (2) running in the background so that I didn’t need to stay up all night or keep my computer on. I ran through the list of options in my head:

  1. Just stream them in a browser window and use screen capture (lame and fails both #1 and #2),
  2. Use one of the millions of Chrome or Firefox plugins that allow saving streams (extra lame and still fails #2),
  3. Use some sort of stream-ripping software built for Linux so I could load it on my always-running Pi (not lame, but after looking for an hour or so I couldn’t find anything that worked), or
  4. Hack it and do it myself.

If you haven’t guessed already, I went with #4. I’m going to show you how I figured it out and how to do it yourself. This assumes some basic knowledge of Linux command line and shell scripting.

First things first: I loaded one of the Nest Cam streams using the links provided on Reddit. The livestream itself sits inside a Nest-branded HTML page that does this really annoying thing where it auto-pauses and pops a Nest advertisement over the video every once in a while. Even if I hadn’t already planned to rip the stream out, that alone would have annoyed me into figuring out how to get at the base stream.

I poked around inside the page source using the Safari dev tools to see if I could find any obvious stream container or link, but didn’t see anything. I did find a more minimal stream formatted for Twitter, but it still does the popover thing. Boo. I also poked around in the JavaScript (warning: there’s a lot of it) to see if the stream was being lazily fetched from any obvious source. Again, nothing. Boo.

I decided to use the Timelines tool to see what was being loaded over the network. I recorded for a few seconds and saw what was clearly a periodic fetch: an XHR request going out approximately every 4 seconds, loading a media_xxxxxxxx_123.ts file and a chunklist_xxxxxxxx.m3u8 file each time. This is an HLS stream delivering MPEG-2 transport stream (.ts) segments, with the chunklist serving as the playlist for the media files. Bingo!

.m3u8 files are commonly used to define video streams, so I knew I was on the right track. Right-clicking the m3u8 file, choosing “Copy Link Address”, and pasting the link into the Safari address bar yielded a base-level video stream with no extra junk (*cough*) on top of it. It looks like Nest serves its livestream content from stream-bravo.dropcam.com or stream-delta.dropcam.com (both currently running Wowza Streaming Engine 4 Subscription Edition 4.7.1 build 20635).
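
For reference, a chunklist .m3u8 is just a short text playlist that the player re-fetches every few seconds, grabbing whichever .ts segments are new (which matches the ~4-second XHR cadence). A hypothetical Wowza-style chunklist (the exact tags and segment names will vary) looks roughly like this:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:123
#EXTINF:4.0,
media_w719996219_123.ts
#EXTINF:4.0,
media_w719996219_124.ts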

The next step was saving the stream using this URL. Time to break out the Pi! I figured I could use ffmpeg to do this, and a quick Google search confirmed it. This StackOverflow question gave me what I needed, except I wanted to ensure the ffmpeg command was always running (in case the stream broke up and was restarted, a network issue occurred, etc.).

For those of you who just want to save a Nest Cam stream to disk using Raspbian/Raspberry Pi/Debian/other Linux, this is the command that will do it for you (you need ffmpeg installed in order to use this):

ffmpeg -i http://your_stream_chunklist_link.m3u8 -c copy -bsf:a aac_adtstoasc /path/to/output/file.mp4

For example, this is the command I used to save the stream I was watching to my home directory:

ffmpeg -i https://stream-delta.dropcam.com/nexus_aac/a8a645a10ef24a50b250c14a08b02ef9/chunklist_w719996219.m3u8 -c copy -bsf:a aac_adtstoasc Stream.mp4

In order to make sure that ffmpeg was always restarted in case of any issues, I whipped up the following shell script (named runStream.sh) to be run as a cronjob:

#!/bin/bash
# runStream.sh
# Make sure the ffmpeg stream-ripping process is always running.

# A string unique to our ffmpeg command line, so the check below matches
# the ffmpeg process and not this script or anything else.
process="chunklist_w719996219"
now=$(date +%Y%m%d%H%M%S)
makerun="ffmpeg -i https://stream-delta.dropcam.com/nexus_aac/a8a645a10ef24a50b250c14a08b02ef9/chunklist_w719996219.m3u8 -c copy -bsf:a aac_adtstoasc /media/HDD/Stream_$now.mp4"

if ps ax | grep -v grep | grep "$process" > /dev/null
then
 exit
else
 $makerun &
fi

exit

The script checks whether the ffmpeg command is running using ps ax and grep. If it is, there’s nothing to do, so the script exits. If it isn’t, ffmpeg is started via the makerun command. Note the $now variable at the end of the filename: it automatically appends a punctuation-less timestamp to each video file, so that the previous file is not lost when ffmpeg is automatically restarted.

The last thing to do was to make the script executable using chmod +x runStream.sh and add it to the crontab using crontab -e. I set it to run every minute (can’t miss any of the action!) using the following crontab:

# m h  dom mon dow   command
* * * * * /home/pi/runStream.sh

After saving the changes and waiting a minute, I saw the first video file pop up. After running for a few hours, the auto-restart has proven to be a great idea: it’s kicked in several times (likely due to haphazard internet, because there’s a HURRICANE).

Stay safe out there, Florida. It’s going to get crazy.

DIY LED Lighting for Fishtank/Aquarium Setups

I’ve got an older Marineland Eclipse 3 tank that needed a new CFL bulb (the old one was barely igniting, and the spectrum was all sorts of off). After looking into the cost of a new bulb from Marineland, I decided I would rather do a simple DIY LED upgrade than pay for the same underwhelming fluorescent bulb.

I purchased a 5m string of cool white LEDs from Amazon: the 3528 size, 60 LEDs per meter. The important thing is that they’re the resin-coated waterproof variety. I went with cool white for two reasons: the spectrum is better for underwater plants*, and as the resin heats and ages it yellows slightly, making the light a bit warmer.
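
A quick power check before picking a supply: 5m at 60 LEDs/m is 300 LEDs, and 3528 strip of that density is commonly rated around 4.8W per meter, so the whole string draws roughly 5 × 4.8 ≈ 24W, or about 2A from a 12V supply. (That wattage is the typical spec for this strip type rather than my own measurement, so size the supply with some headroom.)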

I also had some double-sided reflective foam lying around from a previous project. Although you could do this upgrade without it, it makes for an even more light-efficient setup, since light reflected off the water or the bottom of the tank is bounced back into the tank.

The first step is to remove the previous lighting setup. The Marineland CFL bulb is held inside the hood with a few screws, so it’s easy to remove. Next, I sized the reflective foam and glued it inside the hood using hot glue. It’s important to make cutouts for any hinged openings!

Next, I glued down the LED strip in a “folded” pattern. This isn’t as clean as actually cutting the strip and taping the runs truly parallel, but it keeps the waterproofing intact and really reduces the amount of work needed. Soldering this waterproof strip takes a lot more work than you would think, so don’t do it unless you really need to. I used hot glue in addition to the adhesive backing on the LEDs, since the adhesive isn’t super strong.

Finally (not pictured), I covered the LEDs with a few passes of packing tape and made sure everything was stuck down nicely. This isn’t required, but is a nice bit of peace of mind for when it’s all powered on. The LED strips are supposed to be waterproof, but an extra layer of protection keeping it away from the water in case the glue fails can’t hurt.

And that’s it! It looks great when it’s powered on, emits way less heat, and uses less energy than my expensive Marineland bulbs did. Not to mention that I can reduce the hours it’s on, since the lighting is more efficient for plant growth.

*Tech-overload sidenote: Cool white LEDs don’t actually emit white light. They use a blue-emitting diode under a yellow/red phosphor coating which, combined with the blue, looks white. Because of this, the spectrum of a “white” LED actually peaks sharply in the blue (~450nm) with a broader peak spread across the yellow/red (~550-700nm). It just so happens that the most important absorption wavelength for chlorophyll a is right around 430nm, with a secondary peak around 660nm.

Analyzing the Performance of Acorns Investment Portfolios using Quantopian

I’ve been using Acorns, the app that aims to “help anyone invest,” for a little over a year, mainly as a curiosity. One common theme I’ve seen among Acorns users is disappointment with the lack of returns. Simply searching “Acorns returns” online brings up hundreds of posts (many of them, not coincidentally, on r/investing*). While many of these can be chalked up to unrealistic users irritated that their investment hasn’t quadrupled in value overnight, reading them did make me wonder: what historical return data is available for the different Acorns portfolios?

Digging around, I quickly realized the answer was: not much. Acorns itself publishes no info on the rates of return of its various portfolios, which isn’t unexpected or surprising. Reading the accounts of other users doesn’t cut it for me, since many of them bail after getting antsy or taking their first 2% loss. There’s another camp of users who claim they’ve seen some absurd rate of return (“Bro, I’ve seen 65% since May!”). What about portfolio statistics? Dreaming of reading about portfolio volatility? Keep dreaming. Point being, tangible data is hard to come by.

Coincidentally, I’ve been playing around with Quantopian, an absolutely awesome Python-based algorithmic investing framework and simulation/backtesting platform. Quantopian has stock data (price, volume, dividend/split info, etc.) going back to January 3rd, 2002, and lets users backtest trading algorithms against that data to examine the performance of various strategies. Somewhere along the line, I had the thought: why not just recreate each of the Acorns portfolios and backtest them to analyze their performance? For simple cases like these, backtest results track true performance closely enough that any error is excusable.

Acorns lets users choose a portfolio based on their risk tolerance and investing goals, ranging from “Conservative” to “Aggressive”. The five portfolios are really just different target allocations across six ETFs: VOO, VB, VWO, VNQ, LQD, and SHY. The first four are Vanguard ETFs and the last two are iShares bond ETFs. The portfolio distributions are shown below, with the Aggressive portfolio on top and the Conservative portfolio on the bottom.

[Image: Acorns portfolio allocations, Aggressive (top) through Conservative (bottom)]

This is basically all the info required to build these portfolios within Quantopian. Each security in the portfolio is added to the trading list, and a rebalance function is scheduled to run every day, one hour after the market opens. In reality, the Acorns portfolios rely on new round-ups and contributions to keep allocations at the right percentages, but this method of simulation seems close enough. I also set the backtest to only hold long positions, and made sure trading commissions were zero (so as to focus on portfolio performance rather than the Acorns fee breakdown). The entire algorithm is shown below.

"""
Attempts to model the Acorns Agressive Portfolio for performance modeling.
Backtesting is limited to September 10, 2010 (first trade day of $VOO).
Acorns keeps allocations exact by purchasing fractional shares. Since this isn't generally possible, a larger initial capital must be used (>$10K).
"""

def initialize(context):
 """
 Called once at the start of the algorithm.
 """ 
 set_long_only()
 
 # Each security in the ETF along with the target percentage
 context.voo = (sid(40107), 0.14) 
 context.vb  = (sid(25899), 0.25) 
 context.vwo = (sid(27102), 0.20) 
 context.vnq = (sid(26669), 0.30) 
 context.shy = (sid(23911), 0.05) 
 context.lqd = (sid(23881), 0.06) 
 
 context.security_list = [context.voo, context.vb, context.vwo, context.vnq, context.shy, context.lqd]
 
 # Rebalance every day, 1 hour after market open.
 # In reality, the Acorns app banks on additional buys 
 # To keep the allocations correct.
 schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(hours=1))
 
 # Fees are zero, if you're on the "student plan". 
 set_commission(commission.PerShare(cost=0, min_trade_cost=0))
 set_commission(commission.PerTrade(cost=0))
 
 
def rebalance(context,data):
 """
 Execute orders according to our schedule_function() timing. 
 """
 for security in context.security_list:
   #rebalance to target percentage
   if data.can_trade(security[0]):
     log.info("Rebalancing %s to %s percent" % (str(security[0]), str(security[1])))
     order_target_percent(security[0], security[1])
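
As written, trying a different portfolio means editing six hard-coded percentages. A slightly tidier variant parameterizes the weights; this is just a sketch, using Quantopian's symbol() lookups instead of sid() so the tickers can live in a dict, and only the Aggressive weights below are real (the other portfolios' entries are placeholders, since I'm only reproducing the Aggressive numbers here):

# Target weights per portfolio. Only 'aggressive' is filled in (from the
# code above); the other four Acorns portfolios would get their own entries.
PORTFOLIOS = {
    'aggressive': {'VOO': 0.14, 'VB': 0.25, 'VWO': 0.20,
                   'VNQ': 0.30, 'SHY': 0.05, 'LQD': 0.06},
    # 'moderately_aggressive': {...}, 'moderate': {...}, etc.
}

def initialize(context):
    set_long_only()
    # Swap this key to backtest a different portfolio; nothing else changes.
    weights = PORTFOLIOS['aggressive']
    context.security_list = [(symbol(t), pct) for t, pct in weights.items()]
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(hours=1))
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))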

The only thing that changes between the Acorns portfolios is the set of target percentages. Everything else remains constant, which makes backtesting and examining the different portfolios incredibly easy. Three backtest start dates were chosen: September 10th, 2010; September 10th, 2014; and October 8th, 2015. Each backtest ended on the same date: October 7th, 2016. The returns (%) for each backtesting window are summarized below.

[Image: returns (%) for each portfolio across the three backtesting windows]

At first look, the returns appear reasonable and on par with each portfolio’s description, and I can confirm that they’re very close to the actual returns I’ve experienced during my year of using the app. Returns diminish fairly monotonically from Aggressive to Conservative, as one would expect.

There’s a catch though: I didn’t include the S&P500 benchmark performance in the results.

[Image: the same returns alongside the S&P 500 benchmark]

In all of the backtest windows, the S&P 500 destroys the Acorns portfolios. Suddenly, the 10.57% short-term return doesn’t seem so great after realizing the S&P 500 did almost 13 percent over the same year. Take a look at the Quantopian backtest results for the Aggressive portfolio, the best-performing of the bunch, over the last year.

[Image: Quantopian backtest of the Aggressive portfolio over the last year]

The portfolio almost exclusively underperforms the S&P 500 in terms of returns. It’s also worth noting that the drawdown during the January-February 2016 period is worse than that of the S&P 500 as well.

At this point, the logical thought is “Of course the drawdown sucks, because this is the Aggressive portfolio.” And you’d be partially right. Take a look at the results for the conservative portfolio during the same time period:

[Image: Quantopian backtest of the Conservative portfolio over the same period]

The drawdown during the same period is hardly better than the S&P 500’s, which fell over 3% from its starting point at the beginning of October 2015. That’s rather disappointing, given that the portfolio is billed as the “safer bet” option for investors. The backtest results over the last year for all five portfolios are given below, for anyone interested.

[Images: one-year Quantopian backtest results for all five portfolios]

My final thoughts? It’s not that these portfolios are bad. They’re decent portfolios, especially for beginning investors who are looking to stash away a few pennies here and there with minimal effort and a simple fee structure. Is a 6-10% return better than holding onto a pile of cash and losing out to inflation? Absolutely! An Acorns portfolio serves as an excellent “baby’s first investment”. I just can’t shake the disappointment that all five portfolios have underperformed the market in most, if not all, of the windows I tested. For anyone beyond the novice saver/pocket-change investor, it makes more sense to invest wisely in a few basic ETFs (e.g., NOBL/UPRO) and call it a day. A true zero-fee investment experience can be had with Robinhood, potentially saving the $1-per-month fee. Smart savers who appreciate the auto round-up feature of Acorns can get Wealthfront’s auto-deposit feature with the added benefit of tax-loss harvesting.

Do you have thoughts on my analysis of these portfolios or my backtesting strategy? Let me know in the comments. I appreciate reader input.

*r/investing deserves a post of its own. Similar to r/fitness, the “beginners pretending to be experts” culture leads to an unsurprising amount of awful advice. Trust the internet at your own risk. Trust the internet with investment guidance at your bank account’s risk.

Adding a buzzer/beeper to the Illuminati32/Tarot Naze32 Flight Controller

My buddy just bought an Illuminati32 FC board from HobbyKing for his new ZMR180 miniquad. The board is pretty sweet: Naze32, MWOSD, 35x35mm form factor, and only 20 bucks (on sale). It was really easy to set up, especially with the ZMR180 PDB that Diatone is shipping. The biggest problem is that there’s no buzzer output! A buzzer driver is a basic necessity every flight controller should come with: low-battery, lost-model, and mode-change beeps are pretty crucial to operating these miniquads. Luckily, it’s relatively simple and very cheap to add one to this board. You’ll need:

  • Fine-gauge wire (30 AWG or higher)
  • Fine solder and a fine-tipped soldering iron
  • An NPN BJT (2N3904, BC547, etc.)
  • A 100Ω resistor
  • Heatshrink tubing or tape

The built-in Naze32 (rev 5 or greater) buzzer driver is an NPN transistor in an open-collector configuration, with a base resistor on the order of 100Ω to set the drive current. PA12 (pin 33 of the STM32) drives the BJT.
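
As a quick sanity check on that resistor value: with a 3.3V GPIO and roughly 0.7V dropped across the base-emitter junction, a 100Ω base resistor gives about (3.3 - 0.7) / 100 ≈ 26mA of base drive, which is more than enough to saturate a small-signal NPN switching a 5V buzzer.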

[Image: buzzer driver schematic]

It’s possible to do this BEAM-style with SMT components, on a small piece of protoboard, or even on a small PCB (perhaps from OSH Park). We chose to do it with a little PCB that also carries an ATtiny84 controlling some RGB LED strip.

[Image: the add-on PCB with the ATtiny84]

The hardest part of this is soldering onto the STM32, since PA12 isn’t broken out or used for anything else. See the image below to find the pin you need to solder to. Note that the text on the STM32 isn’t guaranteed to be upright, so look for the pin 1 marker!

[Image: STM32 package with PA12 (pin 33) marked]

You’ll need some fine gauge wire (30 gauge wire wrap wire worked well), a fine-tipped soldering iron, and a steady hand.


Once the solder connection is made, don’t hesitate to dump some hot glue onto it to keep it from being broken loose. Once you’ve got the connection to PA12, solder one end of the 100Ω resistor to the PA12 wire and the other to the base of the BJT. Solder the positive lead of your buzzer to the flight controller’s 5V input, and the negative lead to the collector of the BJT. Finally, solder the emitter of the BJT to ground. Be sure to wrap everything in heatshrink or tape so that you don’t accidentally short anything out.

That’s all there is to it! Just plug in a battery without turning on the transmitter and the FC should issue the “no connection” beep if everything worked.

Eachine H8 Quadcopter Custom Firmware Rates/Settings

I recently flashed my new Eachine H8 with some custom firmware (Silver13’s CFW, to be specific) and spent some time tuning it to be flyable in acro mode with the stock remote. Overall, I think this firmware is super cool and really speaks to my roots as a hardware engineer and reverse engineer. Not to mention, it flies as well as or better than the stock firmware and is tons of fun to tinker with! Here’s my config.h file, for those who might be interested in a good place to start:

//config.h, edited by jaygreco

#include "defines.h"

// rate pids in pid.c
// angle pids in apid.h ( they control the rate pids)
// yaw is the same for both modes

// not including the "f" after float numbers will give a warning
// it will still work

// rate in deg/sec
// for low rates ( acro mode)
#define MAX_RATE 180.0f
#define MAX_RATEYAW 200.0f

// multiplier for high rates
// devo/module uses high rates only
#define HIRATEMULTI 3.0f
#define HIRATEMULTIYAW 4.0f

// max angle for level mode (in degrees)
// low and high rates(angle?)
#define MAX_ANGLE_LO 35.0f
#define MAX_ANGLE_HI 55.0f

// max rate for rate pid in level mode
// this should usually not change unless faster / slower response is desired.
#define LEVEL_MAX_RATE_LO 360.0f
#define LEVEL_MAX_RATE_HI 360.0f

// disable inbuilt expo functions
//#define DISABLE_EXPO

// use if your tx has no expo function
// also comment out DISABLE_EXPO to use
// -1 to 1 , 0 = no exp
// positive = less sensitive near center 
#define EXPO_XY 0.6f
#define EXPO_YAW 0.25f


// Hardware gyro LPF filter frequency
// gyro filter 0 = 260hz
// gyro filter 1 = 184hz
// gyro filter 2 = 94hz
// gyro filter 3 = 42hz
// 4 , 5, 6
#define GYRO_LOW_PASS_FILTER 3

// software gyro lpf ( iir )
// set only one below
//#define SOFT_LPF_1ST_023HZ
//#define SOFT_LPF_1ST_043HZ
//#define SOFT_LPF_1ST_100HZ
//#define SOFT_LPF_2ND_043HZ
//#define SOFT_LPF_2ND_088HZ
//#define SOFT_LPF_4TH_088HZ
//#define SOFT_LPF_4TH_160HZ
//#define SOFT_LPF_4TH_250HZ
#define SOFT_LPF_NONE

// this works only on newer boards (non mpu-6050)
// on older boards the hw gyro setting controls the acc as well
#define ACC_LOW_PASS_FILTER 5


// Headless mode
// Only in acro mode
// 0 - flip 
// 1 - expert
// 2 - headfree
// 3 - headingreturn
// 4 - AUX1 ( gestures <<v and >>v)
// 5 - AUX2+ ( none )
// 6 - Pitch trims
// 7 - Roll trims
// 8 - Throttle trims
// 9 - Yaw trims
// 10 - on always
// 11 - off always
// CH_ON , CH_OFF , CH_FLIP , CH_EXPERT
// CH_HEADFREE , CH_RTH , CH_AUX1 , CH_AUX2 , CH_AUX3 , CH_AUX4
// CH_PIT_TRIM, CH_RLL_TRIM, CH_THR_TRIM, CH_YAW_TRIM
#define HEADLESSMODE CH_OFF


// rates / expert mode
// 0 - flip 
// 1 - expert
// 2 - headfree
// 3 - headingreturn
// 4 - AUX1 ( gestures <<v and >>v)
// 5 - AUX2+ ( none )
// 6 - Pitch trims
// 7 - Roll trims
// 8 - Throttle trims
// 9 - Yaw trims
// 10 - on always
// 11 - off always
// CH_ON , CH_OFF , CH_FLIP , CH_EXPERT
// CH_HEADFREE , CH_RTH , CH_AUX1 , CH_AUX2 , CH_AUX3 , CH_AUX4
// CH_PIT_TRIM, CH_RLL_TRIM
#define RATES 1


// level / acro mode switch
// CH_AUX1 = gestures
// 0 - flip 
// 1 - expert
// 2 - headfree
// 3 - headingreturn
// 4 - AUX1 ( gestures <<v and >>v)
// 5 - AUX2+ ( none )
// 6 - Pitch trims
// 7 - Roll trims
// 8 - Throttle trims
// 9 - Yaw trims
// 10 - on always
// 11 - off always
// CH_ON , CH_OFF , CH_FLIP , CH_EXPERT
// CH_HEADFREE , CH_RTH , CH_AUX1 , CH_AUX2 , CH_AUX3 , CH_AUX4
// CH_PIT_TRIM, CH_RLL_TRIM
#define LEVELMODE CH_AUX1

// channel to initiate automatic flip
#define STARTFLIP CH_FLIP

// aux1 channel starts on if this is defined, otherwise off.
#define AUX1_START_ON

// use yaw/pitch instead of roll/pitch for gestures
//#define GESTURES_USE_YAW

// comment out if not using ( disables trim as channels, will still work with stock tx except that feature )
#define USE_STOCK_TX

// automatically remove center bias ( needs throttle off for 1 second )
#define STOCK_TX_AUTOCENTER

// throttle angle compensation in level mode
// comment out to disable
#define AUTO_THROTTLE

// enable auto throttle in acro mode if enabled above
// should be used if no flipping is performed
// 0 / 1 ( off / on )
#define AUTO_THROTTLE_ACRO_MODE 0


// enable auto lower throttle near max throttle to keep control
// comment out to disable
//#define MIX_LOWER_THROTTLE

// options for mix throttle lowering if enabled
// 0 - 100 range ( 100 = full reduction / 0 = no reduction )
#define MIX_THROTTLE_REDUCTION_PERCENT 100
// lpf (exponential) shape if on, otherwise linear
//#define MIX_THROTTLE_FILTER_LPF

// battery saver ( only at powerup )
// does not start software if battery is too low
// flashes 2 times repeatedly at startup
#define STOP_LOWBATTERY

// under this voltage the software will not start 
// if STOP_LOWBATTERY is defined above
#define STOP_LOWBATTERY_TRESH 3.3f

// voltage to start warning
// volts
#define VBATTLOW 3.5f

// compensation for battery voltage vs throttle drop
// increase if battery low comes on at max throttle
// decrease if battery low warning goes away at high throttle
// in volts
#define VDROP_FACTOR 0.60f

// voltage hysteresis
// in volts
#define HYST 0.10f


// enable motor filter
// hanning 3 sample fir filter
#define MOTOR_FILTER


// clip feedforward attempts to resolve issues that occur near full throttle
//#define CLIP_FF

// motor transient correction applied to throttle stick
//#define THROTTLE_TRANSIENT_COMPENSATION

// motor curve to use
// the pwm frequency has to be set independently
#define MOTOR_CURVE_NONE
//#define MOTOR_CURVE_6MM_490HZ
//#define MOTOR_CURVE_85MM_8KHZ
//#define MOTOR_CURVE_85MM_32KHZ

// pwm frequency for motor control
// a higher frequency makes the motors more linear
//#define PWM_490HZ
//#define PWM_8KHZ
#define PWM_16KHZ
//#define PWM_24KHZ
//#define PWM_32KHZ

// failsafe time in uS
#define FAILSAFETIME 1000000 // one second


// level mode "manual" trims ( in degrees)
// pitch positive forward
// roll positive right
#define TRIM_PITCH 0.0f
#define TRIM_ROLL 1.0f


// ########################################
// things that are experimental / old / etc
// do not change things below

// invert yaw pid for hubsan motors
//#define INVERT_YAW_PID

//some debug stuff
//#define DEBUG

// disable motors for testing
//#define NOMOTORS

// enable serial out on back-left LED
//#define SERIAL


// enable motors if pitch / roll controls off center (at zero throttle)
// possible values: 0 / 1
#define ENABLESTIX 0

// only for compilers other than gcc
#ifndef __GNUC__

#pragma diag_warning 1035 , 177 , 4017

#pragma diag_error 260

#endif
// --fpmode=fast ON

Eagle Keyboard Commands for Showing and Hiding Layers

Here’s a short post, but one that I think a few people might find useful: CadSoft EAGLE keyboard commands/shortcuts (strangely found under the “Assign” menu in EAGLE) to show and hide the Top and Bottom layers independently. I’ve also got one to show all layers, top and bottom. Note that you can change which key a command is bound to when you set up the “assignment” in EAGLE.

To set a shortcut in EAGLE, choose Options → Assign… and press the New button.


Once in the menu, choose the key you want the assignment to bind to, along with any modifier keys (Alt, Shift, etc). I chose Alt+0 for showing the bottom layer only, Alt+1 for the top layer only, and F12 for all top and bottom layers.


Here are the three EAGLE commands for Top, Bottom, and All layers:

  • Top only: display none; display 1 17 18 19 20 21 23 25 27 29 39 41 45 51;
  • Bottom only: display none; display 16 17 18 19 20 22 24 26 28 30 40 42 45 52;
  • All (Top and Bottom): display none; display 1 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 39 40 41 42 45 51 52;

They’re a bit time-consuming to set up, but they’ll save you tons of time once you’re actually doing a layout.

Reverse Engineering the Amazon Dash Button’s Wireless Audio Configuration

Update: I’ve learned a bit more since writing this, so see the bottom of the post for an update. I’ve been slow to update this, but now that it seems to be gaining a lot of traffic (leading up to 33c3… hmmmm ;)), I want the info to be accurate and fresh.

During the great Amazon Dash Button Hype of 2015, I saw a few of the early teardowns and blog posts and decided to order a few dash buttons of my own to play around with and reverse engineer. Since the hype has burned off, there hasn’t been much in the way of new information about the inner workings of the button.

[Image: Dash Button PCB. Photo credit: Matthew Petroff]

The Amazon Dash Button is a neat little IoT device containing an STM32F205 ARM Cortex-M3 microcontroller, a Broadcom BCM43362 Wi-Fi module, a permanently attached (boo!) Energizer lithium AAA battery, an InvenSense I2S digital microphone, some serial flash, and assorted LEDs and SMPS power supplies. For the $5 price tag, the Dash Button packs some serious punch: the components alone are worth considerably more than $5.

Playing around with the button, the setup process on iOS quickly caught my attention. It (apparently*) differs considerably from the Android setup process due to differences in the inner workings of iOS. The Android setup involves connecting to the button via a network called “Amazon ConfigureMe”, while the iOS app appears to use ultrasound-esque audio to transfer the initial setup information to the button.

*I don’t actually have an Android device on hand to test this with, hence the “apparently”.

Without even opening the button, I put together a basic theory on how the button is set up from the iOS app: the app sends a carefully crafted “audio” packet using the iOS Core Audio framework, which is picked up by the Dash Button’s onboard mic and parsed for Wi-Fi config info. If the Wi-Fi credentials are correct, the button phones home to the Amazon configuration servers and the setup continues, with further config info sent directly to the button over Wi-Fi.

I immediately ripped apart the button in search of a way to piggyback on the ADMP441 digital microphone’s I2S bus. I figured it would be trivial to toss a logic analyzer on the bus and decode the I2S data being sent to the STM32. Since I2S is a very common and extremely well-documented audio protocol, I counted on this being a relatively quick task.

While I was impressed with the density of the design, I was most definitely not impressed with the lack of a visible test point for the digital microphone’s data line. The EN (enable), SCK (clock), and WS (word select) lines are easily accessible, but the SD (data) line is nowhere to be found. I poked around for a bit but didn’t see anything promising, and quickly realized I was probably going to have to analyze the audio protocol as it came out of my iPhone rather than sniff it on the board. This was about when I also realized this was not going to be the quick and dirty analysis I was expecting…

Armed with my RØDE shotgun mic, I took a new approach. Using Electroacoustics Toolbox, I performed some basic audio analysis on the packets coming from the Amazon iOS app. Based on Matthew Petroff’s Dash Button teardown, I initially expected some sort of frequency-shift keying (FSK) modulation scheme. Using the Spectrogram tool, I could see that the configuration data was definitely coming in bursts of 20 packets in a try-retry scheme. It also looked like the audio energy was spread between 18kHz and 20kHz, which is on par for an audio FSK implementation.

[Image: spectrogram capture of an entire configuration transmission]

Things got interesting, however, when I took an FFT of an entire transmission. The FFT showed an obvious frequency spread near 19kHz, but lacked the characteristic “double peak” of energy at both the mark and space frequencies.

[Image: FFT of an entire configuration transmission]
[Image: FFT of FSK-modulated data, with the very obvious “double peak” at the mark and space frequencies]

As I examined the FFT, it became clearer and clearer that the configuration data was not FSK modulated. At this point, I switched to the basic audio oscilloscope tool to figure out what was going on. After the first capture, it was pretty obvious that the data was amplitude modulated (AM), with a carrier frequency of 19kHz.

[Image: audio oscilloscope capture of the modulated data]

The data was so clearly AM modulated that I wished I had just popped open the scope to begin with (note to future self)! Here’s a scope capture with a few repeated packets coming through.

[Image: scope capture with a few repeated packets]

After “configuring” a few different Dash Buttons and examining the transmitted data, I was confused by how much variation there was in the peak levels of the packets. I checked for ground loops and background noise before transmitting, and confirmed that the noise floor of my microphone setup was far below the variations in peak amplitude I was seeing. After staring at a few captures, I noticed that the “variations” were consistent in their amplitudes. Looking some more, I realized it wasn’t noise at all: the data was intentionally being sent with four distinct amplitude levels!

[Image: packet capture showing the four distinct amplitude levels]

Clever, clever Amazon was using amplitude-shift keying (ASK) with four levels, packing two bits into every transmitted pulse.

The big benefit of this modulation scheme is the 2-to-1 compression: each pulse carries two bits, so a packet is theoretically half the length of its binary FSK equivalent. The downside is a reduced noise margin between adjacent amplitude levels. That isn’t really a problem here, since the data is sent 20 times and the transmitter (the iOS device) can be physically close to the receiver (the Dash Button).

After these discoveries, I came to a few conclusions:

  • The data is sent from the iOS app using an ASK modulation scheme, with a carrier frequency of 19kHz. It’s resent 20 times before moving on.
  • Each pulse carries two bits, with a nominal symbol time of 4ms. There are four amplitude levels and no true zero: every bit pair, including 00, has some amplitude associated with it.
  • The first chunk of data is always the same. It looks like a simple calibration sequence, allowing the button to set its decoding thresholds for later down the road.
  • There appears to be both a start and a stop glitch on every packet. This could be a byproduct of how Amazon builds its ASK packets in-app, or of the hardware codec starting and stopping on the iPhone. The glitch is harmless, since the transmission is stable by the time any meaningful data comes through.
  • The packets are not of a fixed length: entering a longer SSID or passphrase results in a longer packet.

Now that I had a rough idea of how the data was transmitted, I wanted to take a shot at decoding some known data. This is where things got really interesting for me, because I’ve got basically no experience in data transmission or communications theory. Luckily, I have a decent eye for patterns, which helped considerably in figuring out where each piece of data lived in the transmitted packet. I began by choosing an SSID and passphrase that were fairly easy to recognize, using 7’s and *’s in various combinations and orders. I quickly started to recognize the waveforms of each coming through in the data, but it wasn’t immediately clear how the characters were being translated from their ASCII representation.

[Image: packet containing both 7 and *]

I was getting nervous that some type of encryption was being used on the characters to prevent bored nerds like me from easily snooping on the packets.

In an effort to brute-force whatever translation was taking place, I sent the characters 1 through 9 in the password field. I assigned amplitude level “1” in the received data as binary 00, level “2” as 01, level “3” as 10, and level “4” as 11. I recorded the ASK levels of each character and built a table comparing the received binary data against the known ASCII value of each character. The first thing that was clear was that the binary representation of each character definitely related to the next, which was good news: it ruled out any sort of encryption or lookup-table-based character set. The next observation was that the binary data was decrementing, rather than incrementing as the transmitted ASCII characters did. It was also evident that the bits were somehow scrambled or flipped from the known representation.

After a bit of bit-order manipulation, I arrived at three conclusions (a decoding sketch follows the list):

  • The levels I originally picked (level “4” as 11, level “1” as 00) were backwards. Flipping them yields non-inverted bits, which results in upwards-counting binary data.
  • Each 8-bit ASCII character is transmitted “backwards” from what I expected, with the first two bits sent representing the LSB end of the character. The characters themselves are transmitted in the order they were entered.
  • Each block is four pulses long, representing a total of 8 bits of data.
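
To make the mapping concrete, here's a minimal decoding sketch in Python (my reconstruction, not Amazon's code; it assumes you've already classified each pulse as a level 1-4):

# Per the conclusions above: level 1 -> bits 11 ... level 4 -> bits 00
# (the highest amplitude is NOT the highest bit pair).
LEVEL_TO_BITS = {1: 0b11, 2: 0b10, 3: 0b01, 4: 0b00}

def decode_block(levels):
    """Four pulses (levels 1-4) -> one ASCII byte, LSB bit-pair first."""
    byte = 0
    for i, level in enumerate(levels):
        byte |= LEVEL_TO_BITS[level] << (2 * i)  # pulse 0 fills bits 1:0
    return byte

# Hypothetical example: pulses at levels 4, 2, 1, 4 -> 0b00111000 -> '8'
print(chr(decode_block([4, 2, 1, 4])))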

Armed with the encoding info, my final task was to write a piece of software to listen to the audio sent by the iOS app and decode it into various representations. Doing it by hand was fun for a bit, but got tedious quickly. I rather arbitrarily settled on MATLAB, mostly because it makes it easy to interface with audio hardware, manipulate WAV data, and filter and analyze datasets. I also figured it would be a good way to sharpen up my MATLAB, since it had been a while since I’d fired it up.

With a few hours of coding, I had a script that could listen via my external mic, trim the acquired data down to a single packet (albeit semi-manually), and separate and decode each block into its decimal, hexadecimal, and ASCII representations, saving the result as a CSV file.

To do this, the script uses MATLAB’s built-in AudioRecorder function, then waits for user input marking the bounds of a single packet. With those bounds, it trims the data and performs some simple filtering and peak detection. The peak detection is done using a Hilbert transform (a very common and useful digital envelope-detection method). It then finds each subsequent peak and classifies it by amplitude to recover the corresponding binary data.

[Image: captured and trimmed audio data displayed in MATLAB]
[Image: the same packet after filtering and peak detection, with each peak level indicated by a different colored symbol]
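
If you'd rather avoid MATLAB, the envelope-detection step is only a few lines of Python with SciPy. A rough equivalent (not the code I used; 'packet.wav' is a stand-in for your own mono capture):

import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, find_peaks

rate, audio = wavfile.read('packet.wav')  # mono capture of one packet
audio = audio.astype(float)

# Envelope = magnitude of the analytic signal (Hilbert transform).
envelope = np.abs(hilbert(audio))

# One pulse every ~4 ms, so enforce a minimum peak spacing a bit under
# the symbol time to tolerate jitter.
peaks, _ = find_peaks(envelope, distance=int(0.003 * rate))
amplitudes = envelope[peaks]  # cluster these into 4 levels -> bit pairs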

I also (for no good reason) wrote a tool that goes in reverse: punch in an array of levels (1/2/3/4), and out comes a pseudo-ASK representation of it.

[Image: generated pseudo-ASK waveform. Because why not?]

Using these software tools and several packets, I discovered a few things (a parsing sketch follows the list):

  • The first two blocks of the hypothesized “calibration sequence” are definitely that. They’re 10 bits each, which doesn’t match the rest of the packet, and I’ve looked at hundreds of packets that all start the same way. My MATLAB code actually uses these to find where to start looking for real data. Handy!
  • Block 3 (Decimal rep) is the total length of the data which will come after it, in “number of blocks”.
  • Blocks 4-9 in every packet appear to be some sort of UDID/CRC. I’ll come back to this later.
  • Block 10 (Decimal rep) is the length of the SSID, in blocks.
  • Block 11 (ASCII rep) is the first char of the SSID. In this example, it’s only one character long.
  • Block 12 (Decimal rep) is the length of the passphrase. It isn’t always block 12; its position depends on the length of the SSID. It always appears immediately after the SSID, regardless of whether there’s a passphrase; if there isn’t one, it’s just decimal 0.
  • Block 13 (ASCII rep) is the first char of the passphrase, if it exists. It’s also only one char long in this case.
[Image: the various blocks, numbered by order of occurrence]
[Image: hypothesized purpose of each block of data]
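
Putting the hypothesized layout together, a parser for the decoded byte stream might look like this (field meanings are my guesses from the list above; it assumes the two 10-bit calibration blocks have already been stripped, and the edit at the bottom of the post pins down the first two mystery bytes as a CRC16):

def parse_packet(blocks):
    """blocks: decoded bytes, starting at block 3 (the length byte)."""
    length = blocks[0]                 # block 3: number of blocks to follow
    payload = blocks[1:1 + length]
    udid_crc = payload[0:6]            # blocks 4-9: CRC16 + UDID (see edit)
    ssid_len = payload[6]              # block 10: SSID length in blocks
    ssid = bytes(payload[7:7 + ssid_len]).decode('ascii')
    pass_len = payload[7 + ssid_len]   # 0 if the network is open
    passphrase = bytes(payload[8 + ssid_len:8 + ssid_len + pass_len]).decode('ascii')
    return ssid, passphrase, udid_crc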

The last real question remaining: what are blocks 4-9? They were different in every packet I sent. I immediately suspected some sort of CRC, but the blocks sometimes changed even when I didn’t change the SSID or the passphrase, so it was hard to tell. I’m leaning toward an on-demand unique device identifier (UDID) generated in the iOS app, potentially in combination with a CRC. With 48 bits to spare, a 32-bit UDID along with a 16-bit CRC seems more than reasonable.

With this scheme, device setup would look something like this:

[Image: hypothesized setup flow]

  • User logs into their Amazon account from the app. This takes place every time a Dash Button is configured. Amazon then generates a “short” (<=48 bits) UDID for the Dash Button which associates it with an Amazon Account. They also store this somewhere on their servers.
  • The SSID and passphrase for the Wi-Fi connection are sent via audio packet to the Dash Button, along with the UDID that was just generated.
  • The Dash Button parses the data and attempts to connect to the Wi-Fi network. If it’s successful, it phones home to the Amazon servers with the supplied UDID. The Amazon servers “register” the button as active and tell the iOS app to continue setup.
  • From here, any further configuration data is sent to the button over the network, including what account is registered to the button (likely with more sophisticated verification than I’m alluding to*), what product the button is ordering, and shipping preferences.

*Just looking at the string dumps from the Dash Button firmware shows that there is more sophisticated authentication taking place; it’s just hard to say when. I’m tempted to decompile the firmware just for fun, but I’ve already spent enough time looking at this damn $5 button…

And of course, here’s the final outcome of my efforts:

[Image: a fully decoded configuration packet. BOOM!]

I’ve attached my MATLAB code in the off chance anyone wants to try this at home. It’ll probably take some tweaking for your specific setup.

Here’s the MATLAB code on GitHub.

That’s all I’ve got so far. I’m still curious about the six mystery blocks; if you’ve got any thoughts on them, feel free to let me know. I might do a follow-up post taking a look at the firmware with IDA or something in the future, we’ll see. And of course, if any Amazon employees want to get ahold of me and tell me how far off I was, I’d be okay with that too 🙂

Thanks to Matthew Petroff, GitHub user dekuNukem, and anyone else whom I may have forgotten to credit.

EDIT: A few people digging deeper into the button’s internals have pointed out that the modulation scheme actually IS FSK, with four carriers at 18130, 18620, 19910, and 19600Hz. I believe it so strongly resembled ASK when I observed the audio packets because of the awful frequency response at the top of the audible range of my phone’s speaker, my mic, or both: attenuation that increases right at the top of the audible spectrum would explain the highest frequency being measured as the lowest amplitude. That said, the encoding conclusions above still apply, with the highest frequency representing binary 11.
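
Given the corrected scheme, demodulation amounts to asking which of the four carriers dominates each ~4ms symbol window. A rough Python sketch (single-bin DFT per carrier; it assumes the bit mapping is monotonic in frequency, which the post only confirms for the highest carrier):

import numpy as np

CARRIERS = [18130, 18620, 19600, 19910]  # Hz, sorted low to high

def classify_symbol(window, rate):
    """Return a bit pair 0-3 for one symbol window of audio samples."""
    t = np.arange(len(window)) / rate
    # Energy at each carrier via correlation with a complex exponential.
    energies = [abs(np.sum(window * np.exp(-2j * np.pi * f * t))) for f in CARRIERS]
    return int(np.argmax(energies))  # index 3 = highest carrier = binary 11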

In addition, there is in fact a CRC16 attached to each packet: it’s the first two bytes after the packet-length declaration, and the length byte includes the two CRC bytes. That leaves 32 bits for the UDID, which is POSTed to the Amazon servers at http://dash-button-na.amazon.com/2/r/oft?countryCode=XX&realm=XXAmazon, where XX is US for the United States, DE for Germany, etc. This jives quite strongly with my initial guess of button registration. Thanks to Benedikt Heinz (@EIZnuh) for sharing some of his research into the button’s firmware!

3D Printer Review: QU-BD OneUp

Background

I’m a senior at the University of Colorado, earning a bachelor’s degree in electrical engineering. While working on our senior capstone project, my team has done enough 3D printing for me to consider purchasing a printer myself. One of my teammates has access to an expensive Stratasys Objet, which has amazing print quality but costs an arm and a leg to fill with resin, and it’s always in use since it’s owned by the company he works for. After following the 3D printing buzz for several years, I decided now would be a good time to jump in: printers are affordable and easy to get ahold of, and there are plenty of brands and models to choose from.

I’d been researching different printers for a while, so I had an idea of what type of printer I wanted to invest in. I liked the early MakerBots, but their push toward closed-source consumer machines turned me off. The Formlabs Form 1 is a beautiful and very capable printer, but the cost is too high, and I’d like something I can tinker with. There are a bunch of other printers, like the XYZprinting Da Vinci, that have a great design but one or two fatal flaws (the Da Vinci, for example, has proprietary filament “cartridges”, which can be manually reloaded but are apparently a big pain). Eventually, I came back to a Kickstarter printer I’d bookmarked about a year back: the QU-BD OneUp.

[Image: the QU-BD OneUp]

The QU-BD OneUp and TwoUp are exactly the type of printer I had been looking for: relatively inexpensive, open source, and self-assembled, with a large user base and many publicly available improvements and mods. Assembling the printer is a big hurdle for some, but I’m all about it. There’s no better way to figure out how something works than putting it together yourself, and it lends itself to easier diagnosis and repair when something inevitably doesn’t work right.

The reception from the Kickstarter campaign was lukewarm; many early backers were rubbed the wrong way when their printers were long delayed and missing various parts, but those who got their printers assembled were generally pleased with the construction and print quality. I’d read more than a few upset posts on the fabric8r forums regarding poor communication and arguably slimy behavior on QU-BD’s part, much of which I chalked up to needy or whiny Kickstarter backers. At this point, it’s pretty much common knowledge that almost nothing on Kickstarter ships on time (I multiply any promised ship date by two), and early backers can be entitled and unforgiving when dates slip, so I took that into account when reading reviews. The campaign ended over a year ago, and it appeared that all backers had received their printers. Still, erring on the side of caution, I sprang for the $199 OneUp rather than the $299 TwoUp in the off chance that I did have an unpleasant experience.

Shipping

I ordered my OneUp on November 8th, and QU-BD gave an estimated shipping time of 2-4 weeks. Then came the waiting. Following my rule of thumb, I estimated 6-8 weeks for delivery and planned on the printer being an early Christmas present to myself. After all I’d read, I knew it would be slow to ship. And boy, was it slow to ship. It took just over three weeks for the order to change to the “preparation in progress” stage on December 1st, and from there another 10 days until the order was actually marked as “shipped”.


This is one stage in the order process where I can sympathize with some upset customers. When an order is marked as “shipped”, one would assume that means it’s in the mail. Not quite. I don’t know exactly how QU-BD runs their shipping, but I’d guess they pile a bunch of orders together and then schedule one big pickup date. My order, for example, was marked as shipped on December 11th. The pre-shipment information was sent to USPS on December 12th, but the package wasn’t actually picked up by USPS until December 19th.


My OneUp was shipped via Priority Mail 2-Day and was scheduled for delivery by Monday, December 22nd. However, I grew concerned when the tracking info wasn’t updated after the initial departure scan. Monday came and went, and (to no one’s surprise) there was no delivery. I suspect the package was knocked to the side at a sorting facility, so I got ahold of USPS. To their credit, the package was found and was sitting on my front porch come Christmas Eve.


Here’s my take on QU-BD’s shipping process, which some think might be their biggest weakness. Yes, it’s painfully slow. Was I upset about the shipping time? No. In the end, I got the printer, and the small mix-up in the mail was in no way a fault of QU-BD.

In the era of free Amazon 2-day shipping, it’s easy to get spoiled and frustrated with the long lead times of a smaller, family-run company. But you’re paying almost nothing for one of these printers. QU-BD is basically sourcing all of the parts, bundling them together, and shipping them out for less than you, as an individual consumer, could do on your own. Lead times on equivalent parts from China would be equal to, if not greater than, the lead time of the printer from QU-BD, and these guys are making razor-thin margins by selling so cheap. Moreover, from my understanding, there are only a few of them running the whole operation. I absolutely agree that faster shipping would be nice, but as consumers, we can’t have our cake and eat it too. I could have a comparably-spec’ed printer sitting on my desk tomorrow, shipped FedEx Next Day Air, but it wouldn’t cost 200 dollars; it would be more like $1500. In the end, if you want a decent but inexpensive printer, you’ll probably end up waiting a while for it. If you want it right now, it’s going to cost you more, plain and simple.

Assembly

Upon opening the box, I found it very nicely packed.


QU-BD manages to fit a surprisingly large number of parts into the medium flat-rate box they ship in.


Nothing has room to shift around much during shipping, which in theory should keep all the parts intact. That said, a few of the thinner MDF parts in my kit arrived fractured, all snapped at very thin points. None were completely broken, so they were saved with a little bit of CA glue. I took a quick inventory and, as expected, I was missing some parts: namely, the 6 M3 flat washers and the 4 M4x25 hex-head bolts. QU-BD sends out one shipment of missing hardware for free, so I wasn’t too torn up about it, but since I just wanted to get building, I picked up the missing parts at my local hardware store for less than a buck. As many build logs suggested, I cleaned off all of the machined rods and laser-cut parts before beginning.


The actual assembly of the printer was, in my honest opinion, a total blast. As a kid who blew through countless Lego sets, it felt very familiar. The instructions are well-made and pretty easy to follow. Some of the ALL CAPS assembly notes feel a bit “yell-ish”, but they did a good job of catching my attention. The part naming convention was also a bit strange, but it’s clear they were named from an engineering perspective.


During assembly, a few simple things came to mind that would really improve the build experience. The first is labeling each hardware bag with its part name and designator. I kept having to flip back to the BOM at the beginning of the assembly instructions to verify I was using the correct part. Actually identifying the parts inside their bags was also a little tricky; I managed with a Bolt Size-It gauge, but some buyers might have a harder time. A laser-printed sticker on each bag would cost almost nothing and make a big difference.


In addition to labeled part bags, a small identifier on each MDF part would be super useful. Depending on the part, this could be etched by the same laser cutting machine QU-BD uses to cut the parts, or a small sticker. Once again, it would be a simple and cheap change, but would really improve the user experience.


Lastly, a single-page spreadsheet BOM, either shipped with the parts or included in the manual, would rock. Cross-checking parts against their names and designators would take about half the time it currently does. Once again, these aren’t deal breakers, but they would be easy changes for QU-BD, and I have a hunch they might also help QU-BD ship kits with all of their parts included.

All in all, assembly went pretty smoothly, though I do have a few minor quality gripes. My Geeetech Printrboard clone came with a very poorly soldered SD card slot that probably shouldn’t have made it past QC (assuming there is QC). I have access to a reflow station in the lab I work in, so I’ll just heat it up and fix it myself. Not a huge deal, considering the board works flawlessly otherwise.


I also had a bit of a laugh at some of the acrylic parts that ship with the kit. What was my Y-drive cut with, scissors? Again, no loss, since I plan on printing a well-proven Y-drive upgrade from Thingiverse once I get more filament.


I didn’t include too many pictures of the assembly, since it’s rather uninteresting. Here are a few pictures of the major milestones:


Completed extruder and hotend, attached to the X-drive. I had some 1/8” TechFlex sleeving lying around and used it to cover the extruder and thermistor wires. I really like the look, and it keeps the wires bundled and out of the way. I’m planning on removing the ugly, stiff plastic covering on all of the stepper wires and replacing it with the same sleeving. If there’s one thing this printer design lacks, it’s cable management!

[Image: completed base and Y-Drive assembly]


Final printer assembly completed! All in all, it took me about 4 hours to assemble the entire printer, not counting breaks, distractions, and trips to the hardware store.

Setup and first print

During my roughly two-month wait, I’d taken the liberty of setting up Repetier-Host and Slic3r on my MacBook. I watched a few YouTube videos [1] [2] that helped me set up the host software with little trouble. I connected the printer, heated the extruder, fed in some filament, and… nothing came out. After being sufficiently stumped for about an hour, I took off the extruder nozzle and found it severely clogged with brass filings from the milling process. It took me forever to unclog, even with a 400-micron drill bit. I eventually got it clear, but I believe the nozzle was damaged in the process: the hole is slightly larger than the original 0.4mm, and I suspect it’s no longer perfectly round. I’m hoping I can get a replacement from QU-BD since, after all, it did arrive in an unusable state.

After unclogging the nozzle, I ran a quick test print of a 20x20x20mm cube. The printer was working! After some tuning in Slic3r and Repetier Host, I’m getting decent print quality, even with my damaged nozzle. I didn’t bother calibrating for the filament, since I only had a few meters of the stuff. Once my shipment of Hatchbox filament arrives, I’ll calibrate for flowrate and do some temperature tuning.
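
For anyone curious, the flowrate half of that calibration boils down to simple ratio arithmetic: command a known length of filament, measure how much the extruder actually pulls in, and scale the firmware’s steps-per-mm by the error. Here’s a minimal sketch of the math (the numbers are hypothetical examples, not measurements from my printer):

    # Minimal sketch of the usual extruder calibration arithmetic.
    # All values are hypothetical examples, not measurements from my printer.

    def corrected_esteps(old_esteps, requested_mm, measured_mm):
        """Scale the firmware's steps-per-mm so requested extrusion matches reality."""
        return old_esteps * requested_mm / measured_mm

    # E.g., firmware set to 95 steps/mm; we request 100 mm of filament,
    # but only 97 mm actually feeds through the extruder.
    print(corrected_esteps(95.0, 100.0, 97.0))  # ~97.9 steps/mm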

So far, I’ve printed a small fan bracket and fan guard for the 40mm fan that’s included in the kit. I’m pretty happy with the quality of these first-day prints!

I also added a few ultra-bright blue LEDs under the extruder stepper to light the bed. They’re just tied to a 5V output on the Printrboard’s expansion headers. They illuminate the build platform, and they also do a good job of looking cool. Everyone loves blue LEDs!
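
If you want to copy this little mod, the only math involved is sizing a current-limiting resistor for each LED so the 5V header doesn’t overdrive them. A quick sketch of the Ohm’s law arithmetic, assuming typical blue LED datasheet values (check the datasheet for your actual parts):

    # Current-limiting resistor math for LEDs on a 5V rail.
    # The forward voltage and current are typical blue-LED datasheet values,
    # not measurements from my actual parts.

    def series_resistor(v_supply, v_forward, i_led):
        """Ohm's law across the resistor: R = (Vsupply - Vf) / I."""
        return (v_supply - v_forward) / i_led

    # Typical blue LED: Vf ~ 3.2 V at 20 mA, fed from the 5V expansion header.
    print(series_resistor(5.0, 3.2, 0.020))  # ~90 ohms; round up to a 100 ohm part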

Final Thoughts

All in all, I’m really happy with my QU-BD OneUp. The price, arguably the lowest in the industry, is hard to beat, and even with the tradeoffs that come with such an inexpensive printer, I believe what you get for your money is quite good. If you do decide to get this printer, just be prepared for the long wait before you see it at your front door. QU-BD might be in over their heads in terms of order volume, but you will eventually get your printer. Even then, it’s probably 8-12 hours of assembly and setup away from printing anything. As I mentioned earlier, you’d be hard-pressed to source all of these parts on your own for the price of this kit. The setup isn’t for the faint of heart, but it’s more than doable for anyone with a little patience and some basic problem-solving skills. Not to mention that my day-one prints rival those of an out-of-box MakerBot, which is pretty impressive. All things considered, I’m happy with my decision: not only is the OneUp a good entry-level printer, but I’ve learned a ton about 3D printing along the way.

It’s also worth noting that I’m not affiliated with QU-BD in any way; I just wanted to share my experience with them and the OneUp so far. Do you have experience with QU-BD or any other 3D printer you’d like to share? Leave a comment below. I’d like to hear what you’ve got to say.

Update 1: I got ahold of QU-BD and asked about a replacement nozzle. Chelsea was super quick to get back to me and very professional. I had a new one sitting at my doorstep in just a few days! The quality of this one is much better and I can verify that the prints are already coming out cleaner than before. Whoo!

Dashboard Faceoff: Tesla Model S vs. Porsche Panamera

It’s long been a debate amongst auto enthusiasts: simple and clean, or complex and fancy? When it comes to the dashboard, it’s a hard question to answer. Some will argue that a complicated, overwhelming array of dash controls is reminiscent of a fighter jet; there’s something to be said for a pilot who can seamlessly work all of those controls while still flying the aircraft. On the other hand, all of those buttons, knobs, and switches just look so ugly and cluttered. The argument between more and less seems far too personal and opinion-based to settle with simple persuasion. What’s the solution, then? Science.

To illustrate the sharp contrast between both ends of the dashboard spectrum, the Porsche Panamera and the Tesla Model S were chosen as the main subjects of this article. The reason: their interior design philosophies are about as different as it gets, yet the cars themselves are remarkably similar. Both the Model S and the Panamera are in the luxury sedan class, both sport Italian leather interiors, and both cost more than I can afford. Once you get to the dash, however, all similarities end.

The Tesla Model S struts its stuff oceanside on top, while the Porsche Panamera shows off on bottom.

The Panamera is Porsche’s new take on their classic coupes. Every aspect of their typical two-door models is somehow incorporated into the Panamera. One could argue that Porsche wanted to keep the culture it has developed for itself largely intact; critics had argued that a four-door Porsche just wasn’t a Porsche. But they’ve done it, and it’s turned out quite well. When it comes to the interior, the Panamera is about as classic and conservative as they come. The leather comes in whatever color you’d like, as long as it’s tan. And the dashboard, well, it’s….busy. Every single function has its own button. Every. Single. Function. But is this such a bad thing?

The Panamera’s clean, yet complex dash array.

The Tesla, on the other hand, is refreshing. The Model S is arguably the first cool, stylish, and reliable electric vehicle (EV) to hit the automotive scene. At a time when the Nissan Leaf and the Chevy Volt are struggling to stay on the market, the Model S has been backordered since 2009. Tesla bases their whole design on technological marvel, so why should the dashboard be any different? At the heart of the Model S dash lies a 17-inch touchscreen display, and absolutely nothing else. Every feature of the vehicle, from changing the radio station to turning on the climate control, is manipulated from this single, high-tech interface. The only button you’ll find anywhere near the dashboard is the hazard switch, and it’s probably only there to satisfy some regulation in the state of California. Does it make sense now why the Model S is being compared to the Panamera?

The Model S sports an incredibly minimal design.

Now that we’re familiar with our contestants, let’s get down to business. Is there really any way to determine which design style is “better”? The type of interior you prefer may be rooted deeply in personal ideals, but we can look to design theory for a sense of what generally works and what might not. We’ll take a look at the Panamera first.

Though overwhelming, the large collection of controls doesn’t look terrible on the Panamera.

As mentioned earlier, Porsche took the classical route with the Panamera’s interior. The Panamera has as many buttons as it has functions, and while that may be overwhelming, it might not actually be a bad thing. The reasoning behind this? Complexity. Complexity is commonly split into two types: visual complexity (which the Panamera has plenty of) and operational complexity. The catch is that minimizing one tends to maximize the other. In the case of the Porsche, though it might be visually overwhelming, it’s actually very operationally simple. Want the AC on? Reach over and press the button. Want sport mode? There’s a switch for that. This simplicity of operation goes hand-in-hand with user memory: very quickly, a driver learns by muscle memory where the controls for an often-used function are. Making changes on the fly becomes second nature, with almost no brainpower spent on navigating to the control.

What’s the outcome of this design? Though one might be initially displeased with the seemingly unnecessary number of buttons, knobs, and switches, operating the controls is very straightforward and requires little work beyond the initial search for the button one needs. There’s also the issue of immediate visibility. There’s no question as to what features your shiny new Porsche comes with….because they’re all sitting in front of you, each on its own little button. It’s easy to remember that the vehicle comes with a sport mode when the Sport button is just begging to be pressed.

No argument is complete without a counterargument, and that’s what the Tesla does best. It screams positive change and throws the most basic of automotive constraints out the window (cough, *gasoline*, cough), so don’t think for a second Tesla would be content with putting a “basic” or “boring” interior in their vehicle.

The Model S interior is high-tech, sleek, and most importantly, pleasing to the eye.

If the Panamera is at one end of the complexity curve, the Model S is as far away as can be. A switch for every single feature has been replaced with a single touchscreen that does them all. What does this mean for the user? The interior looks pretty, since it’s visually about as simple as it can be; any simpler, and features would have to be dropped altogether. This comes at a cost, though: the operational complexity of the Model S is massive. When the driver is listening to the radio and wants to turn it down, they must navigate from menu to menu until they reach the control they’re looking for. It’s easy to imagine that a user needs to be good at multitasking to operate a Model S. There’s overhead associated with finding a specific control; the operator needs to take their eyes off the road to make sure they’re moving through the menus correctly. In contrast to the Panamera, the Tesla’s deep stack of menus can also bury the more subtle functions. It might not be obvious that you can turn off the in-seat air conditioning, for example, because the option to do so is buried several menus deep.
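
To put rough numbers on that overhead: if a car’s functions live in a balanced menu tree, reaching any one of them takes on the order of log-base-b of N taps, where N is the number of functions and b is the number of options shown per screen. A toy model of the idea (the figures are invented for illustration, not pulled from Tesla’s actual UI):

    import math

    # Toy model: taps needed to reach one of n_functions controls when they
    # are organized as a menu tree showing `branching` options per screen.
    # The numbers are invented; they are not from the actual Model S UI.

    def taps_to_reach(n_functions, branching):
        """Roughly ceil(log_b(N)) taps to walk a balanced menu tree."""
        return max(1, math.ceil(math.log(n_functions, branching)))

    print(taps_to_reach(60, 6))  # ~3 taps through menus on a touchscreen
    # ...versus exactly 1 press for a dedicated physical button.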

The large touchscreen is visually pleasing, but is it functionally usable?

How does Tesla ensure that the Model S isn’t touchscreen hell? Simple. They just made it bigger. 17 inches is large enough to put several independent sets of controls on the display, so a user can change the temperature of the climate control whilst still viewing their route on the navigation system. Tesla cleverly structures their menus so that the relevant controls are displayed, while unnecessary ones are hidden. A dynamic system is responsible for keeping the visual complexity to a minimum, while also reducing the operational complexity. When you’ve got control of these things in software (rather than a real, physical switch), you can cheat the complexity curve, and that’s exactly what Tesla is aiming to do.
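
The underlying idea is easy to sketch in software terms: each on-screen pane owns its own controls, and anything not currently relevant simply isn’t rendered. A toy illustration (the pane and control names here are invented, not Tesla’s actual software):

    # Toy sketch of context-aware control surfacing: only the panes currently
    # on screen contribute their controls to the display. All names invented.

    CONTROLS = {
        "navigation": ["zoom", "mute guidance", "reroute"],
        "media": ["volume", "next track", "source"],
        "climate": ["temperature", "fan speed", "seat cooling"],
    }

    def visible_controls(active_panes):
        """Show only the controls belonging to the panes currently on screen."""
        return [c for pane in active_panes for c in CONTROLS.get(pane, [])]

    # Navigation and climate panes are up; every media control stays hidden.
    print(visible_controls(["navigation", "climate"]))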

The last area where these two designs diverge greatly is feedback, and feedback is paramount to a good user experience. In the Panamera, the user can simply feel for the mechanical “click” of a button or switch and know the action has registered. When interacting with a touchscreen, there is no such immediate mechanical confirmation. The display on the Model S plays a sound, but that crucial, satisfying pop is still missing.

After pushing and prodding on the Tesla’s display, these buttons feel like something that’s been silently missing.

So, what’s the verdict? In a world where rapidly advancing technology is quick to replace anything and everything in its path, the Model S seems like it should have the arguably “better” dashboard. In the same way the iPhone has replaced all but the most stubborn phones with real QWERTY keyboards, the Tesla seems poised to upend the design of car interiors. However, viewed through the lens of design theory, it becomes apparent that the visual simplicity is hiding an ocean of operational complexity. The Porsche, which at first glance seems archaic with its overwhelming number of controls, might actually still have the upper hand. In the end, the ability to reach out and find a familiar control without sparing a second thought might outweigh the clean, buttonless design of the Model S. Though, I have to say, the Model S does a better job of eradicating buttons and knobs than I could have ever imagined.

It’s almost usable.