I
started writing a blog in May 2016, partly because I kept writing rants
on Facebook that apparently were “too good not to be online somewhere”,
and partly because I was bored after my Master’s degree and wanted
something to do with my Sunday mornings.
Sleeping in, of course, was never an option.
18
months later, and I’ve written about 100,000 words, been published in
all sorts of places, and am now getting regular offers to pitch to major
publications — more on this in the coming months.
And
most importantly of all, I got to 10,000 followers. This time last
year, it was 100 and about half of them were related to me.
All in all, it’s been a good year.
So
what’s in store for the Health Nerd? You’ll be happy to know that this
year I’ve applied for a PhD with the University of Wollongong, which is
actually super exciting, even if it sometimes feels scary to me. I’m
also going to be — hopefully — releasing some episodes of a podcast
that I’ve started with a brilliant co-host. The topic will be science in
the media and I’m really excited to introduce all of you to my dulcet
tones over the airwaves.
I’m so much less awkward than I am in text.
What does all of this activity mean for the blog? Nothing! I’ll still be
aiming for my regular one health story a week on Medium, as well as an
extra members-only article a month for all you subscribers who love
that extra content.
To
sum up, I’d just like to say thank you to you all. I’d never have made
it here without all you brilliant people following me and making this
all worthwhile. It was a fantastic 2017, and 2018 shows every sign of
being brilliant as well.
Every day, as I commute to work or drive, I notice the number of drivers who still
text and drive. Watching drivers next to me go below the speed limit to
“stay safe” as I pass them, or seeing the person in front look down
every two seconds in their side-view mirror: it’s alarming.
Who or what is so important that it’s worth risking lives over a text message? A boss?
A significant other? Do you think the recipient would continue the
conversation knowing the sender was on the road? Probably not.
If you’ve ever texted while driving with friends in the car, did they tell you to
stop? I hope they did, or you need new friends; just kidding, maybe…
Simple Implementation
If a sender is travelling faster than 10 miles per hour, the
recipient would see the sender’s speed displayed below the message.
Privacy
Is
it a privacy concern to allow recipients of messages to see you’re
driving? I think not. Don’t want them to know? Then don’t text them!
With the new iOS driving feature that automatically texts back, perhaps the speed
doesn’t need to be displayed, since the reply is an auto-generated message.
Disable / Passenger?
If a passenger is texting, well… it will also show your speed. However, it
would be possible to show the speed only once,
or to stop displaying it once a user types something along the lines of “no, I am not
driving”.
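A minimal sketch of this logic, assuming the messaging app can read the sender’s speed from the phone’s GPS; every name and threshold here is hypothetical, not a real messaging API:

```python
# Hypothetical sketch of the proposed rule; names and threshold are
# illustrative only.
from typing import Optional

SPEED_THRESHOLD_MPH = 10  # show the sender's speed above this

def speed_banner(sender_speed_mph: float, opted_out: bool) -> Optional[str]:
    """Return the banner the recipient sees, or None for no banner."""
    if opted_out:
        # Honor system: a "no, I am not driving" reply suppresses
        # the banner for the rest of the conversation.
        return None
    if sender_speed_mph > SPEED_THRESHOLD_MPH:
        return f"Sender is moving at {sender_speed_mph:.0f} mph"
    return None

print(speed_banner(35.0, opted_out=False))  # "Sender is moving at 35 mph"
```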
Using the honor system, and trusting that someone wouldn’t lie to a loved
one about texting and driving, this feature could be quite effective at
helping stop people from texting while driving.
Empathy through others.
Imagine
you’re texting your significant other, parents, or siblings, someone
you care about. The speed is displayed so they ask if you’re
driving — are you really going to lie to continue the conversation? I’d
hope you stop, or your recipient stops responding.
It’s a very simple implementation that, through the empathy of the people who
care about you, I feel would cause offenders to stop. If it’s really important,
call them hands-free!
What
are your thoughts? Be sure to follow me as I do a larger case study on
this idea that rewards drivers for not texting and driving!
These are my opinions on where deep neural networks and machine learning are
headed in the larger field of artificial intelligence, and how we can
get more and more sophisticated machines that can help us in our daily
routines.
Please note that these are not predictions or forecasts, but rather a detailed
analysis of the trajectory of the fields, the trends and the technical
needs we have to achieve useful artificial intelligence.
Not all machine learning targets artificial intelligence; there is also
low-hanging fruit, which we will examine here.
Goals
The goal of the field is to achieve human and super-human abilities in
machines that can help us in our everyday lives. Autonomous vehicles, smart
homes, artificial assistants, security cameras are a first target. Home
cooking and cleaning robots are a second target, together with
surveillance drones and robots. Another one is assistants on mobile
devices or always-on assistants. Another is full-time companion
assistants that can hear and see what we experience in our life. One
ultimate goal is a fully autonomous synthetic entity that can behave at
or beyond human level performance in everyday tasks.
See more about these goals here, and here, and here.
Software
Software is defined here as neural network architectures trained with an optimization algorithm to solve a specific task.
Today neural networks are the de facto tool for solving tasks that
involve supervised learning to categorize examples from a large dataset.
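As a concrete picture of that recipe, here is a minimal supervised-training sketch in PyTorch, with random stand-in data in place of a real labelled dataset; any real task would substitute its own data and a larger model:

```python
# Minimal supervised learning: an architecture plus an optimizer fit
# to labelled data. Sizes and data are stand-ins for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a large labelled dataset (e.g. flattened 28x28 images, 10 classes).
loader = [(torch.randn(32, 784), torch.randint(0, 10, (32,))) for _ in range(10)]

for images, labels in loader:      # labelled pairs are the supervision
    logits = model(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()                # gradients via backpropagation
    optimizer.step()               # the optimizer updates the weights
```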
But this is not artificial intelligence, which requires acting in the real
world, often learning without supervision from experiences never seen
before, and often combining previous knowledge across disparate circumstances
to solve the current challenge.
How do we get from the current neural networks to AI?
Neural network architectures
— when the field boomed a few years back, we often said it had the
advantage of learning the parameters of an algorithm automatically from
data, and as such was superior to hand-crafted features. But we
conveniently forgot to mention one little detail… the neural network
architecture that is at the foundation of training to solve a specific
task is not learned from data! In fact it is still designed by hand,
crafted from experience, and this is currently one of the major
limitations of the field. There is research in this direction: here and here
(for example), but much more is needed. Neural network architectures
are the fundamental core of learning algorithms. Even if our learning
algorithms are capable of mastering a new task, they will not be able to
do so if the architecture is not right. The problem with learning neural
network architectures from data is that it currently takes too long to
experiment with multiple architectures on a large dataset: one has to
train multiple architectures from scratch and see which one works
best. Well, this is exactly the time-consuming trial-and-error procedure
we use today! We ought to overcome this limitation and put more
brain-power into this very important issue.
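To make the procedure concrete, here is a toy version of that loop; the random sampling and the placeholder training stub are illustrative assumptions, not how serious architecture search works:

```python
# Toy trial-and-error search: sample an architecture, train it from
# scratch, keep the best. Real architecture search replaces random
# sampling with a learned or evolved controller.
import random
import torch.nn as nn

def sample_architecture() -> nn.Module:
    """Randomly make the choices that are normally hand-crafted."""
    depth = random.choice([2, 3, 4])
    width = random.choice([64, 128, 256])
    layers = [nn.Linear(784, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))
    return nn.Sequential(*layers)

def train_and_evaluate(model: nn.Module) -> float:
    # Placeholder: a real run would train on the full dataset and
    # report validation accuracy. This step is the expensive part.
    return random.random()

best_model, best_score = None, 0.0
for trial in range(20):                  # every trial trains from scratch
    candidate = sample_architecture()
    score = train_and_evaluate(candidate)
    if score > best_score:
        best_model, best_score = candidate, score
```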
Unsupervised learning
— we cannot always be there for our neural networks, guiding them at
every step of their lives and every experience. We cannot afford to
correct them at every instance and provide feedback on their
performance. We have our lives to live! But that is exactly what we do
today with supervised neural networks: we offer help at every instance
to make them perform correctly. Instead, humans learn from just a handful
of examples, and can self-correct and learn more complex data in a
continuous fashion. We have talked about unsupervised learning
extensively here.
Predictive neural networks —
A major limitation of current neural networks is that they do not
possess one of the most important features of human brains: their
predictive power. One major theory about how the human brain works is that
it constantly makes predictions: predictive coding.
If you think about it, we experience this every day, as when you lift an
object that you thought was light but turns out to be heavy. It surprises
you, because as you reached to pick it up, you had already predicted how it
was going to affect you, your body, and the environment overall.
Prediction allows us not only to understand the world, but also to know when we do
not, and when we should learn. In fact, we save information about things
we do not know and that surprise us, so that next time they will not! And
cognitive abilities are clearly linked to our attention mechanism in the
brain: our innate ability to forgo 99.9% of our sensory inputs and
focus only on the data most important to our survival — where is the
threat and where do we run to avoid it. Or, in the modern world,
where is my cell phone as I walk out the door in a rush.
Building
predictive neural networks is at the core of interacting with the real
world, and acting in a complex environment. As such this is the core
network for any work in reinforcement learning. See more below.
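As an illustration of the concept (not our published models), here is a minimal sketch in which a network predicts its next input and the size of the prediction error doubles as a “surprise” signal that gates learning; the sizes, threshold and random frames are stand-ins:

```python
# Sketch: a network predicts its next input; large prediction error
# means "surprise", which triggers learning. Illustrative only.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 256))
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
SURPRISE_THRESHOLD = 0.5  # hypothetical; tuned per task

frames = torch.randn(100, 256)                    # stand-in sensor stream
for current, upcoming in zip(frames, frames[1:]):
    predicted = predictor(current)
    error = (predicted - upcoming).pow(2).mean()  # prediction error
    if error.item() > SURPRISE_THRESHOLD:
        # Surprising input: worth remembering and learning from.
        optimizer.zero_grad()
        error.backward()
        optimizer.step()
```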
We
have talked extensively about the topic of predictive neural networks,
and were one of the pioneering groups to study them and create them. For
more details on predictive neural networks, see here, and here, and here.
Limitations of current neural networks
— we have talked before about the limitations of neural networks as
they are today: they cannot predict, they cannot reason about content, and they have temporal
instabilities. We need a new kind of neural network, which you can read about here.
Neural Network Capsules are one approach to solving the limitations of current neural networks. We reviewed them here. We argue here that Capsules have to be extended with a few additional features:
operation on video frames:
this is easy, as all we need to do is make capsule routing look at
multiple data points in the recent past. This is equivalent to an
associative memory over the most recent important data points. Notice
these are not the most recent representations of recent frames; rather, they are the most recent distinct
representations. Distinct representations with different content can
be obtained, for example, by saving only representations that differ by more
than a pre-defined value. This important detail lets us save relevant
information about the most recent history only, and not a useless series of
correlated data points (a sketch of this storage rule follows after these items).
predictive neural network abilities:
this is already part of dynamic routing, which forces layers to
predict the next layer’s representations. This is a very powerful
self-learning technique that, in our opinion, beats all other kinds of
unsupervised representation learning we have developed so far as a
community. Capsules now need to be able to predict long-term
spatiotemporal relationships, and this is not currently implemented.
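The storage rule from the first item above, sketched with an illustrative threshold and capacity:

```python
# Keep only recent representations that differ from the last stored one
# by more than a threshold, so the memory holds distinct events rather
# than a run of near-identical, correlated frames.
import torch

class DistinctMemory:
    def __init__(self, capacity: int = 8, min_distance: float = 1.0):
        self.capacity = capacity          # how many slots to keep
        self.min_distance = min_distance  # the pre-defined value
        self.slots = []                   # most recent distinct reps

    def update(self, representation: torch.Tensor) -> None:
        if self.slots:
            distance = (representation - self.slots[-1]).norm().item()
            if distance <= self.min_distance:
                return                    # too similar: discard
        self.slots.append(representation)
        if len(self.slots) > self.capacity:
            self.slots.pop(0)             # drop the oldest entry
```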
Continuous learning
— this is important because neural networks need to keep learning
new data points continuously throughout their lives. Current neural networks are
not able to learn new data without being re-trained from scratch each
time. Neural networks need to be able to self-assess the need
for new training and to recognize what they already know. This is also
needed to perform in real life and for reinforcement learning tasks,
where we want to teach machines new tasks without them forgetting older
ones.
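One common remedy, sketched below purely as an illustration and not as the field’s settled answer, is rehearsal: mix a few stored old examples into every update on new data, so the network keeps learning without being retrained from scratch or overwriting older tasks.

```python
# Rehearsal sketch: every new example is learned together with a few
# replayed old ones, so new learning does not simply erase old tasks.
import random
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
replay_buffer = []                        # examples seen earlier in life

def learn(x: torch.Tensor, y: torch.Tensor) -> None:
    replayed = random.sample(replay_buffer, min(3, len(replay_buffer)))
    batch = [(x, y)] + replayed
    inputs = torch.stack([example for example, _ in batch])
    targets = torch.stack([label for _, label in batch])
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    replay_buffer.append((x, y))          # remember for future rehearsal

learn(torch.randn(16), torch.tensor(2))   # one new data point at a time
```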
Transfer learning
— or how do we get these algorithms to learn on their own by watching
videos, just like we do when we want to learn how to cook something new?
That is an ability that requires all of the components listed above,
and it is also important for reinforcement learning. Now you could really
train your machine to do what you want by just giving it an example, the
same way we humans do every day!
Reinforcement learning — this
is the holy grail of deep neural network research: teach machines how
to learn to act in an environment, the real world! This requires
self-learning, continuous learning, predictive power, and a lot more we
do not know yet. There is much work in the field of reinforcement learning,
but to the author it is really only scratching the surface of the
problem, still millions of miles away from solving it. We already talked about
this here.
Reinforcement learning is often referred to as the “cherry on the cake”, meaning that it
is just minor training on top of a plastic synthetic brain. But how can
we get a “generic” brain that then solves all problems easily? It is a
chicken-and-egg problem! Today, to solve reinforcement learning
problems one by one, we use standard neural networks:
a deep neural network that takes large data inputs, like video or audio, and compresses them into representations
a sequence-learning neural network, such as an RNN, to learn tasks
Both of these components are obvious solutions to the problem, and both are
currently clearly wrong, but they are what everyone uses because they are some
of the only available building blocks.
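To picture the two building blocks, here is a sketch of how they are typically wired together; the shapes and sizes are illustrative, not any specific published agent:

```python
# A convolutional encoder compresses each video frame into a compact
# representation; an LSTM consumes the sequence of representations and
# produces one action-score vector per time step.
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, num_actions: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(     # component 1: compress inputs
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=32 * 9 * 9, hidden_size=256)
        self.policy = nn.Linear(256, num_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        codes = self.encoder(frames)               # (time, features)
        hidden, _ = self.lstm(codes.unsqueeze(1))  # component 2: sequence
        return self.policy(hidden.squeeze(1))      # action scores per step

agent = Agent()
scores = agent(torch.randn(10, 3, 84, 84))  # ten 84x84 RGB frames
```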
As such, results are unimpressive: yes, we can learn to play video games
from scratch and master fully-observable games like chess and Go, but I
do not need to tell you that this is nothing compared to solving problems in
a complex world. Imagine an AI that can play Horizon Zero Dawn better than humans… I want to see that!
But this is what we want: machines that can operate like us.
Our proposal for reinforcement learning work is detailed here. It uses a predictive neural network that can operate continuously and an associative memory to store recent experiences.
No more recurrent neural networks —
recurrent neural networks (RNNs) have their days numbered. RNNs are
particularly hard to parallelize for training and are slow even on
special custom machines, due to their very high memory bandwidth
usage — as such they are memory-bandwidth-bound, rather than
computation-bound; see here for more details. Attention-based neural networks
are more efficient and faster to train and deploy, and they suffer much
less from scalability issues in training and deployment. Attention in neural
networks has the potential to really revolutionize a lot of
architectures, yet it has not been recognized as much as it should be. The
combination of associative memories and attention is at the heart of the
next wave of neural network advancements.
Attention has already been shown to learn sequences as well as RNNs do, with up to 100x less computation! Who can ignore that?
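The core operation behind those attention results, written out as a sketch: every position attends to every other position through a couple of matrix multiplies, so the whole sequence is processed in parallel rather than step by step as in an RNN. Sizes are illustrative.

```python
# Scaled dot-product attention over a sequence, with self-attention as
# the usage example.
import math
import torch

def attention(queries, keys, values):
    # queries, keys, values: (sequence_length, dimension)
    scores = queries @ keys.t() / math.sqrt(keys.size(-1))
    weights = torch.softmax(scores, dim=-1)  # where each position looks
    return weights @ values                  # weighted mix of the values

sequence = torch.randn(10, 64)
output = attention(sequence, sequence, sequence)  # self-attention: (10, 64)
```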
We expect attention-based neural networks to slowly
supplant RNN-based speech recognition, and to find their way into
reinforcement learning architectures and AI in general.
Localization of information in categorization neural networks — We have talked extensively about how we can localize and detect key-points in images and video here. This is practically a solved problem that will be embedded in future neural network architectures.
Hardware
Hardware for deep learning is at the core of progress. Let us not forget that
the rapid expansion of deep learning in 2008–2012 and in recent
years is mainly due to hardware:
cheap image sensors in every phone allowed us to collect huge datasets — yes, helped by social media, but only to a secondary extent
GPUs allowed us to accelerate the training of deep neural networks
And we have talked about hardware extensively before.
But we need to give you a recent update! The last 1–2 years saw a boom in
the area of machine learning hardware, and in particular in hardware
targeting deep neural networks. We have significant experience here, and
we are FWDNXT, the makers of SnowFlake, a deep neural network accelerator.
There
are several companies working in this space: NVIDIA (obviously), Intel,
Nervana, Movidius, Bitmain, Cambricon, Cerebras, DeePhi, Google,
Graphcore, Groq, Huawei, ARM, Wave Computing. All are developing custom
high-performance micro-chips that will be able to train and run deep
neural networks.
The key is to provide the lowest power and the highest measured performance
while computing recent, useful neural network operations, not raw
theoretical operations per second, as many claim to do.
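A toy calculation of the point, with made-up numbers:

```python
# Judge a chip by measured throughput per watt on a real network, not by
# its theoretical peak. All numbers below are invented for illustration.
peak_tops = 10.0      # marketing figure: theoretical tera-ops per second
measured_tops = 2.0   # throughput actually sustained on a real network
power_watts = 5.0

utilization = measured_tops / peak_tops   # 0.2: only 20% of peak is usable
efficiency = measured_tops / power_watts  # 0.4 TOPS/W: the number to compare
print(f"utilization {utilization:.0%}, efficiency {efficiency} TOPS/W")
```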
But
few people in the field understand how hardware can really change
machine learning, neural networks and AI in general. And few understand
what is important in micro-chips and how to develop them.
Here is our list:
training or inference? —
many companies are creating micro-chips that can provide training of
neural networks. This is to gain a portion of the market of NVIDIA,
which makes the de facto training hardware to date. But training is a small
part of the story and of the applications of deep neural networks. For
every training step there are a million deployments in actual
applications. For example, take one of the object detection neural networks you
can use in the cloud today: it was trained once, yes on a lot
of images, but once trained it can be used by millions of computers on
billions of data points. What we are trying to say here is that training hardware
matters as little as the ratio of the number of times you train to the
number of times you deploy. Moreover, making a chipset for training requires
extra hardware and extra tricks. This translates into higher power for
the same performance, and thus not the best possible fit for current
deployments. Training hardware is important, and an easy modification of
inference hardware, but it is not as important as many think.
Applications
— hardware that can provide training faster and at lower power is really
important in the field, because it will allow us to create and test new
models and applications faster. But the real significant step forward
will be in hardware for applications, mostly in inference. There are
many applications today that are not possible or practical because the
hardware, not the software, is missing or inefficient. For example, our
phones could be speech-based assistants, but they are currently sub-optimal
because they cannot operate always-on. Even our home assistants are tied
to their power supplies, and cannot follow us around the house unless we
sprinkle multiple microphones or devices around. But maybe the largest
application of all is removing the phone screen from our lives and
embedding it into our visual system. Without super-efficient hardware,
all this and many more applications (small robots, for example) will not be possible.
winners and losers
— in hardware, the winners will be the ones that can operate at the
lowest possible power per unit of performance and move into the market
quickly. Imagine replacing the SoC in cell phones; it happens every year. Now
imagine embedding neural network accelerators into memories. This may
conquer much of the market faster and with significant penetration. That
is what we call a winner.
About neuromorphic neural networks hardware, please see here.
Applications
We talked briefly about applications in the Goals section above, but we
really need to go into detail here. How are AI and neural networks going
to get into our daily lives?
Here is our list:
categorizing images and videos
— already here in many cloud services. The next step is doing the same
in smart camera feeds, which is also here today from many providers. Neural
net hardware will allow us to remove the cloud and process more and more
data locally: a win for privacy and for saving Internet bandwidth.
speech-based assistants
— they are becoming a part of our lives, as they play music and control
basic devices in our “smart” homes. But dialogue is such a basic human
activity that we often take it for granted. Small devices you can talk to
are a revolution that is happening right now. Speech-based assistants are
getting better and better at serving us, but they are still tied to the
power grid. The real assistant we want moves with us. How about our
cell phone? Well, again hardware wins here, because it will make that
possible. Alexa, Cortana and Siri will be always on and always with you.
Your phone will be your smart home, very soon. That is again another victory
for the smartphone. But we also want it in our car and as we move around town.
We need local processing of voice, and less and less cloud: more privacy
and lower bandwidth costs. Again, hardware will give us all that in 1–2
years.
the real artificial assistants
— voice is great, but what we really want is something that can also see
what we see and analyze our environment as we move around. See an example here and ultimately here.
This is the real AI assistant we can fall in love with. And neural
network hardware will again grant your wish, as analyzing video feeds is
very computationally expensive, and currently at the theoretical limits
of current silicon hardware. In other words, it is a lot harder to do than
speech-based assistants. But it is not impossible, and many smart
startups like AiPoly
already have all the software for it, but lack powerful hardware for
running it on phones. Notice also that replacing the phone screen with a
wearable glasses-like device will really make our assistant part of us!
the cooking robot — the next biggest appliance will be a cooking and cleaning robot.
Here we may soon have the hardware, but we are clearly lacking the
software. We need transfer learning, continuous learning and
reinforcement learning, all working like a charm. Because you see: every
recipe is different, and every cooking ingredient looks different. We
cannot hard-code all these options. We really need a synthetic entity
that can learn and generalize well to do this. We are far from it, but
not that far: just a handful of years away at the current pace of
progress. I will surely work on this, as I have done in the last few
years.
The people have spoken! (But let’s run the numbers anyway).
On the 19th of December 2017, Jay Boston
hosted his own electric skateboard awards initiative. A cool little
idea, particularly considering it was the electric skateboard community
itself deciding who would receive the honors.
1,387 people participated in an online survey that decided the winners in
each category. Granted, I’m sure a lot of the respondents were
Australian, hence the results seemed a little top-heavy towards boards
that are easily accessible to us here down under. Hopefully the event
garners a little more international participation each year to help even
out the results a bit. There were categories where boards such as Metroboard, Carvon and Trampa
should have been mentioned, but they were nowhere to be seen!
Nevertheless, it’s a great initiative and will hopefully go from
strength to strength in the coming years. A quick shout-out to Jay for
having me on as a guest — cheers mate!
The Enertion Raptor 2 was crowned the overall winner of the best electric skateboard of 2017 — as voted for by the people.
You can check out the video of the live event below:
Nominations
were only open to boards that had actually delivered production units
to customers in 2017. Enertion, with just under a couple of hundred
Raptor 2 units in the field at the time the awards were streamed, got in
by the skin of their teeth. However, the fact that the Raptor 2 won
tells us that those people who have a Raptor 2, as well as the multitude
of people who have tested the board on ride days and events, are
clearly very, VERY impressed with Enertion’s end result.
I thought it might be interesting to compare the people’s choice with
something a little more academic, finishing off with a bit of commentary
regarding the results and any differences between them.
Below I’ve selected what are arguably the 10 most popular production boards of 2017.
(Boards selected are single- and dual-drive boards in street configuration only.
This analysis is focused on the upper end of the market, towards boards
that might be considered “premium” or “top-tier”, made by companies owned and
operated from such places as the United States, Australia and Europe.)
Top Speed: 24mph (38kph) | Range: 25 miles (40km) | Hills: 30% | RRP: $1899 USD
A couple of notes on the above:
All prices are RRP in USD (specials, sales, shipping, taxes and other fluctuations are not taken into consideration). All specs are taken directly from the US or international websites of the board manufacturers themselves (correct as of December 2017).
Boosted finally announced the release of their extended-range battery in late 2017, which “doubles the range”. However, not only is the extended-range battery not a standard item, I don’t think anyone outside of a few YouTubers actually got their batteries in 2017.
It should be noted that Carvon have a second EVO V4 Dual model called the ‘XL’, which has the same range, a lower top speed of 35mph, but a much higher hill-climbing capacity of 25%, which rivals many of the other boards on this list. It comes at a cost of $100 more than the standard EVO V4 Dual, at $2099 USD. The ‘XL’ was not included in this comparison as, to my knowledge, no (or very few) units made it into the hands of the public in 2017. I even debated whether or not to include the regular EVO (known as the R-Spec), as there are barely any units in public hands, but they are out there.
The listed top speed of the Evolve boards is taken from the known achievable top speed on 97mm wheels, the most popular wheel choice for Evolve riders and the standard wheel size on the GTX. As the Bamboo GT and Carbon GT come with 83mm wheels as standard, the RRP has been adjusted to include a set of ABEC11 97mm Flywheels as priced on the Evolve USA website (109.99 USD) in both cases.
The Mellow Board lists a range bracket between 7.5 and 10 miles on their website. For the sake of simplicity I chose 8.5 miles as somewhere in the middle.
Like Evolve, the top speed spec of the Metroboards is based on the 97mm wheel option in both cases. Both Metroboards in this comparison have been tricked out: 97mm wheels for both, 10-watt lights for both, and the single drive has the biggest battery available included in the comparison. Metroboard hill-climbing specs are estimates, as they’re not included on the Metroboard website. The single drive is known to rival Boosted’s and Evolve’s (25%), so by virtue of that knowledge the dual drive must exceed this (30% or more).
*Please see further notes about Mellow Board pricing in the ‘Pricing’ section of this article.
Ranking System Used
In each category (top speed, range, hills and RRP) each board is given a rank in best-to-worst order: 1 being the best/cheapest, then ascending until we get to the worst/most expensive.
The boards with the lowest scores are the best in each category and overall (average).
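In code form the scheme looks like this; the ranks below are placeholders, not the article’s actual numbers:

```python
# Each board holds its rank (1 = best/cheapest) in every category; the
# overall winner is the board with the lowest average rank.
boards = {
    "Board A": {"top_speed": 2, "range": 1, "hills": 3, "rrp": 4},
    "Board B": {"top_speed": 1, "range": 3, "hills": 1, "rrp": 2},
    "Board C": {"top_speed": 3, "range": 2, "hills": 2, "rrp": 1},
}

def average_rank(ranks: dict) -> float:
    return sum(ranks.values()) / len(ranks)

winner = min(boards, key=lambda name: average_rank(boards[name]))
print(winner, average_rank(boards[winner]))  # lowest average wins
```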
Top Speed
The Carvon EVO V4 Dual
is the king of speed in 2017. There’s then quite a drop down to the
Enertion Raptor 2 in second place, which is still significantly faster
than the next bunch of boards — the Evolve line-up, which all punch out
the same top speed. The Mellow Board is hovering around the middle
followed closely by the two Metroboards, which each punch out the same
top speed. Down the bottom of the list we have the Boosted Board Gen2
Dual+ and the Inboard M1.
From
where I’m sitting I’d expect anything with a score of 3 to 5 to all be
very similar in real life. It’s really splitting hairs. From that
bracket it is a significant step up to the Raptor 2 and then an even
bigger step up again to the EVO (maybe too much?)
The Boosted Board and Inboard M1 are significantly over-rated in the speed department.
Range
There are five distinct categories here: We have the Metroboard single
that’s in a class of its own! Then we have the Evolve GTX and Carbon
GT, which essentially share the same battery. Next we have the
upper-middle class of range: The Carvon EVO, Enertion Raptor 2 and
Metroboard Dual. The Evolve Bamboo GT stands alone as a mid-range board
and our list ends with the low-range, swappable battery category of
boards. An optimist might consider the final category to be even better
than the ones above it, as swappable batteries can in reality mean
“endless range”. The problem being, of course, that more batteries
equals more $$$…
Hill Climbing
I’d say we’re looking at four distinct categories of hill climbing here.
The first category is reserved for certified incline killers: the Enertion Raptor 2 and Metroboard Dual!
Then we have a group of aggressive hill climbers comprising the
Evolve line-up, Boosted Board and Metroboard single. The Mellow stands
alone as a moderate hill climber, and our list ends with a couple of
boards that shy away from inclines, the Carvon EVO and Inboard M1.
It
should be noted that with the optional 38T drive gear and hard
duro/small wheels, the Evolve GT/GTX line-up are also capable of
climbing hills on par with (even better than?) the Metroboard Stealth
Dual and Enertion Raptor 2. Video here. However, the 38T drive gear is not standard.
Price
Note: The Mellow Board pricing was taken straight from mellowboards.com and converted from EUR to USD. After publication I was made aware of mellowboardusa.com,
where adjusted pricing can be found direct from the US distributor. The
difference is that shipping a drive unit from Europe would have a
considerable shipping fee attached to it. It’s clear this cost (and
other sundry costs) has been incorporated into the US distributor price
of $1,995. Please make your own adjustments and determinations regarding
this as you read the rest of the article.
In the Sub-$1500 category we have the Inboard M1 and
Evolve Bamboo GT. In the $1500-$1800 category we have the Metroboard
single, Mellow Board, Evolve GTX, Boosted Board and Enertion Raptor 2.
In the $1800 and above category we have the Metroboard Dual, Carvon EVO
and Evolve Carbon GT (man, carbon fiber is expensive!)
And The Winner Is…
The
equal winners of this little test couldn’t be more different! According
to just raw specs vs. price, the best electric skateboard of 2017 is a
tie between the Evolve Bamboo GT and the Metroboard 41" Slim Stealth Edition (single)!
On paper the Evolve Bamboo GT represents well-rounded specs at a
reasonable price. In addition, Evolve also have that tempting 2-in-1
conversion capability, allowing you to fit pneumatic all-terrain tyres
to your board, making it an entirely different beast!
If you can forgo the need for pneumatic all-terrain tyres, I believe the
Metroboard single to be a far better option. Top speed between the two
is splitting hairs and they both climb the same grade of hills, but the
Metroboard has insane range! Spend approximately $200 more to get the
Metroboard single over the Bamboo GT and you instantly upgrade from a
19-mile-range board to a 40-mile-range board! Again, that’s insane!
The next issue to tackle is one of aesthetics vs. quality. The Evolve looks
better, there’s no denying it. It has nice flex, dual-kingpin trucks
(if that’s your thing) and is just an all-round slimmer and sexier
design. The Metroboard is not as slim and stealthy as its name suggests. It
rides high and stiff compared to an Evolve. When it comes to the
question of quality, however, the opposite is true. Evolve’s quality and
reliability have been called into question time and time again, whereas
Metroboards are known as bulletproof tanks! Then there’s the question
of batteries. Paper specs tell us the Bamboo GT has a 19-mile range, but
due to the low-quality cells Evolve use in their battery packs, Evolve
boards generally suffer from the worst battery sag in the industry. I
think it would be fair to say that the Bamboo GT actually gets about 14
miles of enjoyable/manageable range, which now really tilts the scales
in favor of the Metroboard single.
My Thoughts on the Results
If you had to call a winner out of the two tied boards, it would have to be the Metroboard 41" Slim Stealth Edition (single). For speed, range and hill climbing vs. dollar + quality and reliability, it just can’t be beat!
Of
course, however, there will be people who don’t need 40 miles worth of
range and would much prefer to have the option for pneumatic all-terrain
tyres, save $200 and get the Bamboo GT. There will also be people who
just plain don’t like the look/feel of something like the Metroboard.
One of the most interesting results for me was the gap between the Evolve
GTX and Carbon GT. These are essentially the exact same board: they
have the same top speed, range and hill-climbing capability. The
difference is purely cost. That carbon fiber deck must cost a pretty
penny! The GTX comes in at $1728.99, whereas the Carbon GT comes in at
$2069.98 (which also includes a set of ABEC11 97mm Flywheels, otherwise
the board won’t reach the quoted top speed, matching the GTX). That’s an
insane cost difference for exactly the same performance between the two
boards. I personally view the GTX as the preferable choice here. It’s
not only cheaper, but it’s more flexy and more modular, as the deck and
enclosure are separate pieces, allowing for more modifications down the
road (on the Carbon GT the deck and the enclosure are one complete
unit). On the other hand, the Carbon GT is longer (40 inches compared to
the GTX’s 38), lighter (17 lbs compared to the GTX’s 19.4 lbs) and
obviously has a far more rigid and stiff feel to it. Some people prefer
the latter points.
I guess we also can’t ignore the fact that these paper-based results see
the Boosted Board languishing in last place. The board scores extremely
poorly in the speed and range departments. The KO then comes from the
high price tag attached to what is now considered a fairly
mediocre spec sheet. But (and it’s a big but) SPECS AREN’T EVERYTHING…
Boosted
remains the smoothest and most comfortable electric skateboard I’ve
ever ridden! A tremendous amount of care and attention to detail is put
into their product. Their remote and mobile app are still best in class
and their QC and customer service also, arguably, remains unmatched.
Yes, there are far better performing electric skateboards you can get
for your money, but very few do the “off board” stuff as well as
Boosted, very few have such a well-rounded, well-finished, polished and
respected product that “just works” as Boosted do. That’s what you pay for.
What these results say in the end is that user experience counts for far
more than specs ever will. The problem is that user experience is a very
hard thing to measure, particularly from an independent, third-party
perspective.
Or is it?…
The People’s Choice
This
brings us back full circle to Jay Boston’s Electric Skateboard Awards
and the overall winner as voted by 1,387 people — the Enertion Raptor 2!
The Raptor 2 comes fourth in a straight-up specs showdown, but it’s arguable
that the Evolve Bamboo GT is only above it due to its price point. In
addition, I’d be surprised if there were any more than five Metroboards
in the whole of Australia! Add to that Evolve’s known reliability and
durability woes and it’s easy to see why the Enertion Raptor 2 came out
on top!
The
Enertion Raptor 2 is faster than the Evolve suite of boards, is
comparable in range to the GTX and Carbon GT (once you account for the
Evolve sag factor) and is an equal or better hill climber in stock
configuration. It sits around the same price point as an Evolve GTX,
which is also obviously significantly cheaper than a Carbon GT.
If
you’re after a performance board packing the latest in motor, battery
and VESC/FOCBOX technology that has great specs across the board at a
highly competitive price, in my mind, the people got it right!
The Best Electric Skateboard of 2017?
In
the end that’s completely up to you to decide. It’s completely
subjective. What’s best for one might not be what’s best for another.
If
the best electric skateboard for 2017 to you is simply the fastest
electric skateboard, then the best electric skateboard of 2017 is the
Carvon EVO V4 Dual.
If
the best electric skateboard for 2017 to you is simply the electric
skateboard with the most range, then the best electric skateboard of
2017 is the Metroboard 41" Slim Stealth Edition (single).
If
the best electric skateboard for 2017 to you is simply the electric
skateboard with the best hill climbing capabilities, then the best
electric skateboard of 2017 is the Enertion Raptor 2 or Metroboard 41"
Stealth Dual.
If
the best electric skateboard for 2017 to you is simply the most
reliable/durable electric skateboard, then the best electric skateboard
of 2017 is the Boosted Board Gen2 Dual+ or maybe one of the Metroboards.
If
the best electric skateboard for 2017 to you is simply the most
versatile electric skateboard, then the best electric skateboard of 2017
is an Evolve GT/GTX.
I
honestly do think the people got it right in selecting the Enertion
Raptor 2 as the best all round electric skateboard of 2017, but I also
think an honorable mention needs to go to the Metroboard 41" Slim
Stealth Edition (single) from a pure specs for dollar + quality
point-of-view.
It truly is an exciting time to be into electric skateboards!
I’ve been building my smart home over the last few years and was in the
market to add sensors everywhere, in an effort to improve the automations
I was able to achieve.
I previously had a couple of Philips Hue Motion sensors and Elgato Eve
Door & Window sensors, but at £35 apiece, adding these to every room
and door would get very expensive. I was introduced to the Xiaomi
ecosystem and decided to give it a try. Interestingly, this is the first
time that I’ve opted to buy non-native devices and rely on
Homebridge for the integration. Prior to this, I’d used Homebridge as a
way to integrate tech that I already owned.
Purchase
I
got all of my kit from a site called Lightinthebox.com. This was the
only site that I found that shipped to the UK and had a wide range
stocked. I initially opted for:
One
thing to note is that the website quoted 5–8 days for shipping — this
was actually more like 19, but for the price I can’t really complain.
Setup
The setup was fairly trivial. I did, however, need to upgrade the version of
Node running on my RPi3 to work with the plugin. So as not to waste
countless hours in Node dependency hell, I’d recommend a fresh install
of everything. I took a copy of my config.json file, made a note of my
installed plugins and completely wiped my SD card.
Follow these steps to get going (this assumes you’re on an iPhone, running iOS 11 or later)
Download the MiHome app, set up the gateway and configure your accessories. It doesn’t really matter which rooms the devices are placed in.
Open the MiHome app, tap on the gateway, then tap on the 3 dots in the top-right corner.
Select About, then repeatedly (and quickly) tap on the blank space until three additional menu options in Chinese appear.
Tap the second option. This allows you to turn on local access mode. A password should appear; make a note of it as you’ll need it soon.
Tap back and select the third option. Make a note of the MAC address of the gateway. There are a couple listed: one for the router that the gateway is connected to and one for the gateway itself. If it’s unclear which is which, try both. (If you run Homebridge with the -D flag, you’ll get debug info which will let you know if you’ve connected to the gateway correctly.)
Install the homebridge-mi-aqara plugin and input the MAC address and password from the steps above into your config.json file (a sample entry follows below).
Restart HomeBridge and your accessories should now appear.
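For reference, the platform entry in config.json looks roughly like the following. Field names have varied between versions of the plugin, so treat this shape as indicative and check the plugin’s README; the sid (the gateway MAC with colons stripped) and password below are placeholders for the values from the steps above.

```json
{
  "platforms": [
    {
      "platform": "MiAqaraPlatform",
      "gateways": [
        {
          "sid": "aabbccddeeff",
          "password": "password-from-local-access-mode"
        }
      ]
    }
  ]
}
```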
Usage
The
first thing to note is how tiny the door sensors are. Here’s an image
with the Elgato Eve as a comparison. Due to the size of the Eve device
and the trim around my doors, I’ve had to be creative with how I mount
it.
The second thing to note is how quickly these sensors update within
HomeKit; unscientifically, I’d say it’s instant. Even with the latest
firmware, the Elgato sensors still have a slight delay if they haven’t
been triggered for a period of time. This still makes them unsuitable
for certain automations, where you need a light to turn on immediately,
for example.
The door sensors show up as regular sensors, along with three other
accessories from the gateway: a light sensor, a multi-colour light and a
switch. The light actually makes a pretty decent nightlight, especially
as you don’t need to physically connect it to a router.
I’ve got a couple of automations set up where I use a door sensor in
combination with a motion sensor to detect whether somebody is entering or
leaving a room. To do this I have a motion sensor on each side of the
door and then use the motion as a conditional rule. For example, I want
to turn on a table lamp in my daughter’s room when the door opens, but
only between 5am and 8am. This assumes that I’m going into her room
when she is awake and that I want the light to come on with a soft glow.
It also assumes that if I’m already in the room and leave during that
window, I don’t want the light to come on (if, for example, she isn’t
actually awake, or she settles back to sleep). To do this, I have a motion
sensor on the landing and one in her room (via a D-Link Omna camera), with a
rule stating that the lamp should only come on when there is motion
detected on the landing. If there is motion on the landing then I must
be outside of the room, therefore entering. If there’s motion in the
room, then I’m leaving, so the rule doesn’t trigger.
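Expressed as logic (HomeKit and Eve store this as rules rather than code, and the names here are purely illustrative), the automation boils down to:

```python
# The lamp rule: door opens, within the 5am-8am window, and motion was
# seen on the landing (meaning I am outside the room, i.e. entering).
from datetime import time

def lamp_should_turn_on(door_opened: bool, landing_motion: bool, now: time) -> bool:
    in_window = time(5, 0) <= now <= time(8, 0)
    return door_opened and landing_motion and in_window

print(lamp_should_turn_on(True, True, time(6, 30)))   # True: entering
print(lamp_should_turn_on(True, False, time(6, 30)))  # False: leaving
```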
To get the extra condition I used the Elgato Eve app: first set up the basic
automation rules in the Apple Home app, and then add the condition
using Eve.
So
far, I’m really impressed with the Xiaomi system and would certainly
consider adding more devices (although you can only add 30 per gateway)
to my setup.