Programming & IT Tricks

Thursday, February 1, 2018

The Apple Battery Cover-Up: Triumph of Management, Failure of Leadership


This is a difficult post for me to write. It’s a post about Apple — yet it’s not the same Apple where I spent 22 years of my career. It’s also a post about competent management — and the utter failure of leadership.

You’ve probably seen the headlines by now. Apple recently rolled out an update that slows down older phones, ostensibly in an effort to preserve the life of aging batteries.
The thing is, Apple didn’t tell anyone that this was happening; a lot of iPhone users upgraded to newer models when they could have simply bought new batteries — a much smaller financial investment — and continued to use their old phones.
It’s been a public relations nightmare, with multiple class action suits already filed. And Apple’s solution to the problem has been to apologize — rather feebly, and only after the whole thing was uncovered by a Reddit user — and knock down the battery replacement cost to $29. (It normally runs about $79.)

This is unbelievable to me.

When I was at Apple in the early 2000s, I ran into a somewhat similar problem, albeit on a much smaller scale. About 800 iBooks (yes there was actual hardware called an iBook), all of them in university settings, started exhibiting problems with their CD trays.

We acted quickly, and replaced every single one of those 800 units, no questions asked.

I know for a fact that we lost a couple of customers to Microsoft over this. I also know that we did the right thing. We were proud to have done the right thing. And most of our customers appreciated it.
Even with this slight inconvenience, they felt good about how we were treating them. Our response to the hardware malfunction enhanced our brand and our reputation.

Again: The Apple you’re reading about today is not the same company I worked for all those 22 years.

I can think of so many better ways they could have handled this:

1. The best solution would have been to just be upfront with customers in the first place. Say, “Hey, we’re glad you enjoy your old-school iPhone, but you’re going to be left behind; in order to download the latest iOS updates, you need to upgrade to a newer device.”
This kind of thing is, of course, totally normal in the tech world; you can’t run the latest macOS on an older MacBook any more than you can run the latest version of Windows on a 1980s PC. Tech changes, and eventually goes obsolete.
2. Another solution? In response to the aging battery issue, offer a coupon to those old-school iPhone users, giving them 50 percent off an iPhone 8. This is a feel-good solution — a new phone for a fraction of the price! Plus, it gets people into the Apple Store, and makes them actually happy.
3. Apple could even have offered to replace those old batteries in the store, free of charge — an inconvenient and cumbersome solution, but at least it would have shown some real customer service initiative. And again, it would generate traffic to the Apple Store and an opportunity to upgrade. Has everyone forgotten about the traffic conversion factor?
Any of those solutions would have been preferable to Apple’s secretive software upgrade — which, again, we only know about through social media users, not because Apple was forthcoming about it — to say nothing of its lame apology and its trifling $29 battery offer.
Here I might note that, according to some of my sources on the inside, the actual cost of a battery is in the single digits — so the fact that Apple is still making people pay $29 for a new one, in the face of a major PR scandal and with $200 billion in reserves, is absolutely stunning.

Sure: In the short term, Apple’s saving a few bucks. That’s because the company is managing this problem well.

Managing a problem means getting through it with minimum trouble to the company. It involves a focus on numbers and accounting, but a short-sightedness when it comes to relationships and customer goodwill.
Instead of managing the problem, Apple should be leading it — not doing the bare minimum to save its neck, but doing the right thing, taking pride in doing the right thing, and trusting that customers will appreciate it. That’s what leadership means.
In other words, Apple should be thinking a few steps ahead, and realizing that a few bucks for free battery replacements (or discounted iPhone upgrades) mean nothing compared to the loss of goodwill the company now faces.

Goodwill (or relationships, when you get right down to it) is the most precious commodity it or any other company has. And Apple is squandering it.

And that’s to say nothing of the lack of communication here — as if Apple’s executives don’t know the old political adage that the cover-up is always worse than the deed.
This whole episode may be seen as a turning point for Apple — its real transition from Steve’s company into Tim’s. Tim Cook is a great manager, and he’s certainly managing this situation ably.

But Steve would have done something better: He would have shown leadership.

Monday, January 29, 2018

Why HubSpot’s Building a Centralized Platform



In one year, HubSpot doubled the number of certified partners in its platform ecosystem and increased the number of apps installed by customers by 142% — here’s why that matters.
We’re living in the golden age of marketing and sales technology. There are more than 5,000 marketing and sales technology vendors globally, all striving to help businesses better find and delight customers in a digital world.
As a result, there’s no lack of cool and exciting software in this space. If you can imagine just about any creative new capability you’d like for engaging with your customers, there’s probably a martech startup out there somewhere building it.
The challenge, however, is figuring out how to get all these different tools to work well together — without needing a crack team of IT engineers to take months wiring them up. As Dr. McCoy from Star Trek might have protested, “Damn it, Jim, I’m a marketer, not a systems architect.”
This is the challenge that a centralized platform can solve.
What exactly makes a SaaS solution a “platform” instead of simply being a product?
Almost every SaaS product today has APIs that let it exchange data with other applications. A platform, however, plays a more active role in coordinating how multiple products work together. You can picture a platform as a hub, with spokes connecting other products to its center. The hub binds those disparate products together and orchestrates them in a common mission.
A platform creates a stable center of gravity in your marketing and sales stack by delivering three main benefits through a centralized:
1. Data Model. A platform does more than just exchange data with other apps in your stack. It establishes an organizing model for that data — for instance, a common identity and record structure for a lead, a customer, a deal, etc. It maps data from all the other apps connected to it into those common record formats, enforcing a baseline level of data quality. That centralized and well-structured database then serves as a shared “source of truth” for the platform and any other app that wants to tap into it (see the sketch after this list).
2. Workflow and User Experience. Research has found that marketers and salespeople can lose a lot of time switching between different applications. A platform reduces that overhead by establishing a centralized “home base” where most users can do the majority of their work. In addition to providing a common view of shared data across apps, it also becomes the center of their workflow for most activities — especially if apps embed key features directly into the platform’s user interface. Individual users might still log into other apps for more specialized tasks, but there’s much less day-to-day app switching across your organization.
3. Certification Authority. When you integrate apps on your own, you must take full responsibility for making sure that everything plays well together. A platform lifts some of that burden off your shoulders by establishing a trusted certification process for apps in its ecosystem. Certified apps will integrate smoothly, and you’re assured that they’ve been reviewed for a certain level of compatibility. A helpful directory of all certified apps maintained by the platform company can also make it easier to find the right app to add whenever a particular need arises.
All of these factors help lower the organizational costs of adopting multiple products in your marketing and sales stack, by reducing friction in their selection, installation, and use.
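To make the “common record format” idea concrete, here is a toy sketch in Python. Every name in it is hypothetical (HubSpot’s actual object schemas live in its developer documentation), but it shows the mapping role a platform’s data model plays:

# A toy sketch of a platform's centralized data model: app-specific records
# are mapped into one shared contact schema (all names hypothetical).
def to_platform_contact(raw_record, source_app):
    field_maps = {
        "webinar_tool": {"attendee_email": "email", "attendee_name": "name"},
        "billing_tool": {"customer_email": "email", "full_name": "name"},
    }
    field_map = field_maps[source_app]
    return {common: raw_record[specific] for specific, common in field_map.items()}

# Two apps, two record shapes, one shared "source of truth" format.
print(to_platform_contact({"attendee_email": "ada@example.com", "attendee_name": "Ada"}, "webinar_tool"))
print(to_platform_contact({"customer_email": "ada@example.com", "full_name": "Ada"}, "billing_tool"))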

The Growth Dynamics of Platform Ecosystems

To get a sense of how well a platform is doing at delivering those benefits, you can look at two key indicators of ecosystem health through growth of:
1. The number of apps installed by customers. If more platform customers are installing more certified apps, that’s one of the strongest signals that there’s real value in the ecosystem for them. If installing or using apps is difficult — or ultimately doesn’t achieve results — this metric stalls.
2. The number of certified apps. Quality matters more than quantity when it comes to a platform ecosystem. An app directory filled with a bunch of low-quality apps creates more confusion than clarity. But if the number of high-quality certified apps is growing, it’s a good sign that the platform dynamics are working for app developers too. A platform that makes it easier for businesses to successfully adopt more apps naturally attracts more developers.
By both of these measures, the HubSpot platform had a good year in 2017.
The number of apps installed by HubSpot customers on our platform increased 142%, and the number of certified apps in our Connect partner ecosystem grew by 108%.
The graphic at the top of this post illustrates what our platform ecosystem looks like here at the start of 2018. You can also browse our updated integrations directory to learn more about all the different capabilities these app developers have to offer. We’re anticipating further expansion in the year ahead.
While we still have much work to do — we aspire to build a truly lovable platform, and we hold that as a very high bar — we’re excited about the growing momentum in our ecosystem. But most of all, we’re delighted to see our customers getting measurable benefits from our platform by effectively integrating more specialized capabilities into their marketing and sales stacks.
That’s what really matters.

Great technology. Shit service. That’s our reputation.


In one of my earliest roles at a B2B startup, there were so many fires in a day that if no “emergencies” occurred for even a couple of hours, I immediately sensed something was wrong. It got to the point where I knew hundreds of clients by name because of how frequently I needed to do damage control.
Meanwhile, new products, new features, and new services were continuously released as we aimed to stay on the ‘cutting edge’ of technology. With limited resources and a mission to stay innovative, low-impact bugs and issues affecting only a minority of clients were deemed low priority. I watched clients cancel and support staff burn out.
Internally, the “importance of customer service excellence” was reiterated time and time again through every possible means — email, chat, message boards, meetings, handbooks, training workshops, etc. Pull aside any employee at random and they could mindlessly regurgitate that it was one of the company’s core values. In reality, we missed the mark by a long shot.
“Great technology. Shit service. That’s our reputation.”
If this got a chuckle out of you, then you probably know what I’m talking about.

Where was the disconnect? The answer isn’t black and white, but I saw two areas contributing the most to this issue.
  1. Not all clients were treated equally. With the mentality of move fast and break things, beta users or clients who contributed to advancing the product/software were implicitly given priority. This would not necessarily have been a bad thing had there been load balancing to ensure sufficient support for the majority of the client base — paying customers with expectations.
  2. Innovation was prioritized over maintenance. Yes, complacency is dangerous and it is important to grow, to scale. But at what expense? With stretched resources, it can be easy to neglect seemingly ‘low impact’ bugs and glitches. The result? A team of support staff unequipped to provide long-term solutions to recurring issues for clients that reach out again, and again, and again.

Let’s break this down.

When a startup makes the transition from early stage (looking for market validation), into a growth stage, there is no longer the luxury of only dealing with ‘Innovators’ and ‘Early Adopters.’
This. Is. Not. A. Bad. Thing.
Great technology that is lucky enough to have reached ‘product market fit’ serves a need, fills a gap, or solves a problem. Here’s the thing. Clients onboarding at this point — the ‘Early Majority’ — have an inherent expectation that they can reliably use the product. There is a lower tolerance for inconsistency, errors, glitches.
Most, if not all, are not willing guinea pigs supporting your grander, ultimate vision. They do not care about that. They did not sign up for that. They want the tool they paid for to work. They want it to work, the way it’s designed to, when it’s supposed to, so they can go about their day running their own businesses.

What am I saying?

The crux of it is this. There comes a point when innovation can wait. The point where the difference between success and failure is execution. Not the idea. Not intelligence. Consistent execution.
I get the sense that many startups thrive on the concept of organized chaos and inherently reject structure. Perhaps it’s a cultural thing. Perhaps some startups remain functional on this model. However, organized chaos is still chaos. And I, for one, cannot imagine operational efficiency being optimal on a model of chaos.
There comes a time for structure, which does not have to mean rigidity. But it needs to create stability. While this will look different for every company, some general principles apply. I’m talking about standard operating procedures. Enforcing internal processes (e.g. clients are not QA, production environments are not meant for testing… test the code!). And please, documentation can no longer be optional.
Stability is just as important as scalability. Hate to say it, but scaling on an unstable foundation is stupidity. Especially when hubris allows a company to believe they can get away with it.
The company I referred to at the start of this article was a SaaS startup on a subscription model. Retaining customers was vital, but the cost of switching platforms was often too high for clients and there weren’t comparable programs on the market. As a result, the team calibrated to the errors and took our clients’ tolerance for granted.
There is nothing more detrimental to a business than falling into the trap of believing its technology is great enough to make up for bad service.

Tuesday, January 23, 2018

AI Innovation: Security and Privacy Challenges


To anyone working in technology (or, really, anyone on the Internet), the term “AI” is everywhere. Artificial intelligence — technically, machine learning — is finding application in virtually every industry on the planet, from medicine and finance to entertainment and law enforcement. As the Internet of Things (IoT) continues to expand, and the potential for blockchain becomes more widely realized, ML growth will occur through these areas as well.
While current technical constraints limit these models from reaching “general intelligence” capability, organizations continue to push the bounds of ML’s domain-specific applications, such as image recognition and natural language processing. Modern computing power (GPUs in particular) has contributed greatly to these recent developments — which is why it’s also worth noting that quantum computing could accelerate this progress exponentially over the next several years.
Alongside enormous growth in this space, however, has come increased criticism; from conflating AI with machine learning to relying on those very buzzwords to attract large investments, many “innovators” in this space have drawn criticism from technologists as to the legitimacy of their contributions. Thankfully, there’s plenty of room — and, by extension, overlooked profit — for innovation around ML’s security and privacy challenges.

Reverse-Engineering

Machine learning models, much like any piece of software, are prone to theft and subsequent reverse-engineering. In late 2016, researchers at Cornell Tech, the Swiss Institute EPFL, and the University of North Carolina reverse-engineered a sophisticated Amazon AI by analyzing its responses to only a few thousand queries; their clone replicated the original model’s output with nearly perfect accuracy. The process is not difficult to execute, and once completed, hackers will have effectively “copied” the entire machine learning algorithm — which its creators presumably spent generously to develop.
The risk this poses will only continue to grow. In addition to the potentially massive financial costs of intellectual property theft, this vulnerability also poses threats to national security — especially as governments pour billions of dollars into autonomous weapon research.
While some researchers have suggested that increased model complexity is the best solution, there hasn’t been nearly enough open work done in this space; it’s a critical (albeit underpublicized) opportunity for innovation — all in defense of the multi-billion-dollar AI sector.
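To make the attack pattern concrete, here is a minimal sketch in Python. It is not the researchers’ actual method, just the general recipe of querying a black-box model and fitting a clone to its answers; the victim model and data here are stand-ins built with scikit-learn:

# A minimal model-extraction sketch: train a "victim" classifier, query it
# as a black box, and fit a clone on nothing but (query, predicted label) pairs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

# The attacker never sees X, y, or the weights, only answers to its queries.
queries = np.random.RandomState(1).randn(5000, 10)
stolen_labels = victim.predict(queries)
clone = DecisionTreeClassifier().fit(queries, stolen_labels)

# Measure how often the clone reproduces the victim's behaviour on fresh inputs.
test = np.random.RandomState(2).randn(1000, 10)
agreement = (clone.predict(test) == victim.predict(test)).mean()
print('clone matches victim on %.1f%% of fresh inputs' % (100 * agreement))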

Adversarial “Injection”

Machine learning also faces the risk of adversarial “injection” — sending malicious data that disrupts a neural network’s functionality. Last year, for instance, researchers from four top universities confused image recognition systems by adding small stickers onto a photo, through what they termed Robust Physical Perturbation (RP2) attacks; the networks in question then misclassified the image. Another team at NYU showed a similar attack against a facial recognition system, which would allow a suspect individual to easily escape detection.
Not only is this attack a threat to the network itself (consider it against a self-driving car, for example), but it’s also a threat to companies that outsource their AI development and risk contractors putting their own “backdoors” into the system. Jaime Blasco, Chief Scientist at security company AlienVault, points out that this risk will only increase as the world depends more and more on machine learning. What would happen, for instance, if these flaws persisted in military systems? Law enforcement cameras? Surgical robots?
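For a sense of how little machinery a basic adversarial perturbation needs, here is a toy gradient-sign sketch against a plain logistic model. This is a deliberate simplification (the RP2 and facial-recognition attacks above target deep networks and physical objects), but the principle of stepping the input in the loss-increasing direction is the same:

# A toy "fast gradient sign" perturbation against a logistic model.
# For logistic regression, d(loss)/d(input) = (p - y) * w, so the attack
# needs only the model's weights and one sign() call.
import numpy as np

rng = np.random.RandomState(0)
w, b = rng.randn(10), 0.1          # stand-ins for a trained model's parameters

def score(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid output

x = rng.randn(10)                  # some input; treat class 1 as its label
y = 1.0

grad_x = (score(x) - y) * w        # gradient of the loss w.r.t. the input
x_adv = x + 0.5 * np.sign(grad_x)  # small step that pushes the loss up

print('clean score:       %.3f' % score(x))
print('adversarial score: %.3f' % score(x_adv))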

Training Data Privacy

Protecting the training data put into machine learning models is yet another area that needs innovation. Currently, hackers can reverse-engineer user data out of machine learning models with relative ease. Since the bulk of a model’s training data is often personally identifiable information — e.g. in medicine and finance — this means anyone from an organized crime group to a business competitor can reap economic reward from such attacks.
As machine learning models move to the cloud (e.g. for self-driving cars), this becomes even more complicated; at the same time that users need to privately and securely send their data to the central network, the network needs to make sure it can trust the users’ data (so tokenizing the data via hashing, for instance, isn’t necessarily an option). We can once again generalize this challenge to everything from mobile phones to weapons systems.
Further, as organizations seek personal data for ML research, their clients might want to contribute to the work (e.g. improving cancer detection) without compromising their privacy (e.g. providing an excess of PII that just sits in a database). These two interests currently seem at odds — but they also aren’t receiving much focus, so we shouldn’t see this opposition as inherent. Smart redesign could easily mitigate these problems.
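One established line of work on exactly this tension, useful aggregates without exposing individual records, is differential privacy, which the “smart redesign” above could draw on. A toy sketch of the Laplace mechanism follows (the dataset here is synthetic):

# A toy Laplace-mechanism sketch: release a mean that is useful in aggregate
# while masking any single record's contribution.
import numpy as np

def dp_mean(values, lower, upper, epsilon):
    values = np.clip(values, lower, upper)        # bound each record's influence
    sensitivity = (upper - lower) / len(values)   # max effect of changing one record
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

ages = np.random.RandomState(0).randint(20, 70, size=10000)  # stand-in for PII
print('true mean:            %.3f' % ages.mean())
print('private mean (eps=1): %.3f' % dp_mean(ages, 20, 70, epsilon=1.0))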

Conclusion

In short: it’s time some innovators in the AI space focused on its security and privacy issues. With the world increasingly dependent on these algorithms, there’s simply too much at stake — including a lot of money for those who address these challenges.

Sunday, January 21, 2018

How you can build your own VR headset for $100


My name is Maxime Coutté. I’m 16 and I built my own VR headset with my best friends, Jonas Ceccon and Gabriel Combe. And it ended up costing us $100.
I started programming when I was 13, thanks to my math teacher. Every Monday and Tuesday, my friends and I used to go to his classroom to learn and practice instead of having a meal at the cafeteria.
I spent one year building a very basic 8-bit OS from scratch and competing in robotics contests with my friends.
I then got interested in VR, and my friends and I agreed that it would be really cool to create our own world in VR where we could spend time after school. But an Oculus cost $700 at the time, so we decided to build our own headset.
3D printed parts of the headset

Making VR accessible to everyone?

It was because of an anime called Sword Art Online where the main character is in a virtual reality RPG that I fell in love with VR. I wanted to understand every aspect of it.
I bought the cheapest components I could and we started by learning the very basics of the physics and math behind VR (proper acceleration, antiderivatives, quaternions…). And then we re-invented VR. I wrote WRMHL, and then FastVR with Gabriel. Putting all of this together, we ended up with a $100 VR headset.

A fully hackable VR headset and development kit

To speed up VR development time, we built FastVR, an open-source SDK for developers that is easy to understand and customize. It works like this:
  • The core headset computes the position of the headset in space;
  • The position is sent from the headset to WRMHL, and part of the CPU’s power is dedicated to reading those messages;
  • Then FastVR retrieves the data and uses them to render the VR game.
Everything you need to build the headset has been open-sourced and can be hacked.
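As a rough illustration of the first two steps, here is a sketch of a host program reading orientation data streamed over serial. The wire format here is hypothetical (WRMHL’s actual protocol lives in its repo), but the shape of the loop is the same:

# Hypothetical sketch: read 'w,x,y,z' quaternion lines that the headset's
# microcontroller streams over serial (requires the pyserial package).
import serial

port = serial.Serial('/dev/ttyACM0', baudrate=115200, timeout=1)

def read_orientation():
    line = port.readline().decode('ascii', errors='ignore').strip()
    if not line:
        return None
    w, x, y, z = (float(v) for v in line.split(','))
    return (w, x, y, z)

while True:
    quaternion = read_orientation()
    if quaternion:
        # A renderer like FastVR would consume this once per frame.
        print('headset orientation: %s' % (quaternion,))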

Why open source?

I want to make VR mainstream. So I reached out to Oussama Ammar, one of the co-founders at The Family. I talked to him about setting up a company and launching a Kickstarter.
But he convinced me that for now, it’s better to wait on starting a business, to keep meeting others who have the same goals, and to keep learning.
We took a trip to Silicon Valley and Oussama introduced me to the chief architect at Oculus, Atman Brinstock. And they gave me some precious advice: make all of this open source.

The Next Step?

There are still a lot of technical points that we want to improve.
Our big focus right now is on a standalone VR headset, which we already have as a simple version, and cheaper 3D tracking.
All of this will be released soon.

How do I get started?

If you want to learn more about the technical side and build your headset, just follow the guide by clicking here. Star the repo if you liked it ⭐️

Thursday, January 18, 2018

Making your own Face Recognition System


Face recognition is the latest trend when it comes to user authentication. Apple recently launched their new iPhone X, which uses Face ID to authenticate users. OnePlus 5 is getting the Face Unlock feature from the OnePlus 5T soon. And Baidu is using face recognition instead of ID cards to allow their employees to enter their offices. These applications may seem like magic to a lot of people, but in this article we aim to demystify the subject by teaching you how to make your own simplified version of a face recognition system in Python.

Background

Before we get into the details of the implementation, I want to discuss the details of FaceNet, which is the network we will be using in our system.

FaceNet

FaceNet is a neural network that learns a mapping from face images to a compact Euclidean space where distances correspond to a measure of face similarity. That is to say, the more similar two face images are, the smaller the distance between them.

Triplet Loss

FaceNet uses a distinct loss method called Triplet Loss to calculate loss. Triplet Loss minimises the distance between an anchor and a positive, images that contain the same identity, and maximises the distance between the anchor and a negative, images that contain a different identity.
Figure 1: The Triplet Loss equation, L = [ ||f(a) - f(p)||² - ||f(a) - f(n)||² + alpha ]+
  • f(a) refers to the output encoding of the anchor
  • f(p) refers to the output encoding of the positive
  • f(n) refers to the output encoding of the negative
  • alpha is a constant used to make sure that the network does not try to optimise towards f(a) - f(p) = f(a) - f(n) = 0.
  • […]+ is shorthand for max(0, ·); negative values of the sum are clamped to zero

Siamese Networks

Figure 2: An example of a Siamese network that uses images of faces as input and outputs a 128-number encoding of the image. Source: Coursera
FaceNet is a Siamese Network. A Siamese Network is a type of neural network architecture that learns how to differentiate between two inputs. This allows it to learn which images are similar and which are not. These images could contain faces.
Siamese networks consist of two identical neural networks, each with the exact same weights. First, each network takes one of the two input images as input. Then, the outputs of the last layers of each network are sent to a function that determines whether the images contain the same identity.
In FaceNet, this is done by calculating the distance between the two outputs.

Implementation

Now that we have clarified the theory, we can jump straight into the implementation.
In our implementation we’re going to be using Keras and Tensorflow. Additionally, we’re using two utility files from deeplearning.ai’s repo to abstract all interactions with the FaceNet network:
  • fr_utils.py contains functions to feed images to the network and get the encodings of images
  • inception_blocks_v2.py contains functions to prepare and compile the FaceNet network

Compiling the FaceNet network

The first thing we have to do is compile the FaceNet network so that we can use it for our face recognition system.
import os
import glob
import numpy as np
import cv2
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
from keras import backend as K
K.set_image_data_format('channels_first')
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
def triplet_loss(y_true, y_pred, alpha = 0.3):
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]

    # Squared L2 distance between the anchor and the positive encoding.
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # Squared L2 distance between the anchor and the negative encoding.
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # The [...]+ from Figure 1: clamp (pos_dist - neg_dist + alpha) at zero.
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))

    return loss
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
We’ll start by initialising our network with an input shape of (3, 96, 96). That means that the Red-Green-Blue (RGB) channels are the first dimension of the image volume fed to the network, and that all images fed to the network must be 96x96 pixel images.
Next we’ll define the Triplet Loss function. The function in the code snippet above follows the definition of the Triplet Loss equation that we defined in the previous section.
If you are unfamiliar with any of the Tensorflow functions used to perform the calculation, I’d recommend reading the documentation for each of them, as it will improve your understanding of the code. But comparing the function to the equation in Figure 1 should be enough.
Once we have our loss function, we can compile our face recognition model using Keras. And we’ll use the Adam optimizer to minimise the loss calculated by the Triplet Loss function.

Preparing a Database

Now that we have compiled FaceNet, we are going to prepare a database of individuals we want our system to recognise. We are going to use all the images contained in our images directory for our database of individuals.
NOTE: We are only going to use one image of each individual in our implementation. The reason is that the FaceNet network is powerful enough to only need one image of an individual to recognise them!
def prepare_database():
    database = {}
    for file in glob.glob("images/*"):
        identity = os.path.splitext(os.path.basename(file))[0]
        database[identity] = img_path_to_encoding(file, FRmodel)
    return database
For each image, we will convert the image data to an encoding of 128 floating-point numbers. We do this by calling the function img_path_to_encoding. The function takes in a path to an image and feeds the image to our face recognition network. Then, it returns the output from the network, which happens to be the encoding of the image.
Once we have added the encoding for each image to our database, our system can finally start recognising individuals!

Recognising a Face

As discussed in the Background section, FaceNet is trained to minimise the distance between images of the same individual and maximise the distance between images of different individuals. Our implementation uses this information to determine which individual the new image fed to our system is most likely to be.
def who_is_it(image, database, model):
    encoding = img_to_encoding(image, model)
    
    min_dist = 100
    identity = None
    
    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():
        dist = np.linalg.norm(db_enc - encoding)
        print('distance for %s is %s' %(name, dist))
        if dist < min_dist:
            min_dist = dist
            identity = name
    
    if min_dist > 0.52:
        return None
    else:
        return identity
The function above feeds the new image into a utility function called img_to_encoding. The function processes an image using FaceNet and returns the encoding of the image. Now that we have the encoding we can find the individual that the image most likely belongs to.
To find the individual, we go through our database and calculate the distance between our new image and each individual in the database. The individual with the lowest distance to the new image is then chosen as the most likely candidate.
Finally, we must determine whether the candidate image and the new image contain the same person, since by the end of our loop we have only determined the most likely individual. This is where the following code snippet comes into play.
if min_dist > 0.52:
    return None
else:
    return identity
  • If the distance is above 0.52, then we determine that the individual in the new image does not exist in our database.
  • But, if the distance is equal to or below 0.52, then we determine they are the same individual!
Now, the tricky part here is that the value 0.52 was reached through trial and error on my part for my specific dataset. The best value might be much lower or slightly higher, and it will depend on your implementation and data. I recommend trying out different values and seeing what fits your system best!

Building a System using Face Recognition

Now that we know the details on how we recognise a person using a face recognition algorithm, we can start having some fun with it.
In the Github repository I linked to at the beginning of this article is a demo that uses a laptop’s webcam to feed video frames to our face recognition algorithm. Once the algorithm recognises an individual in the frame, the demo plays an audio message that welcomes the user using the name of their image in the database. Figure 3 shows an example of the demo in action.
Figure 3: An image captured at the exact moment when the network recognised the individual in the image. The name of the image in the database was “skuli.jpg” so the audio message played was “Welcome skuli, have a nice day!”
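A rough sketch of that webcam loop looks like the following. The repo’s demo differs in its details; in particular, the resize-to-96x96 step here is a simplifying assumption about how the face is cropped:

# Sketch of a webcam demo loop built on the functions defined above
# (requires OpenCV; assumes the face roughly fills the frame).
import cv2

database = prepare_database()
capture = cv2.VideoCapture(0)        # the laptop's default webcam

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    face = cv2.resize(frame, (96, 96))            # simplifying assumption
    identity = who_is_it(face, database, FRmodel)
    if identity is not None:
        # The actual demo plays this as an audio message.
        print('Welcome %s, have a nice day!' % identity)

capture.release()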

Conclusion

By now you should be familiar with how face recognition systems work and how to make your own simplified face recognition system using a pre-trained version of the FaceNet network in Python!
If you want to play around with the demonstration in the Github repository and add images of people you know then go ahead and fork the repository.
Have some fun with the demonstration and impress all your friends with your awesome knowledge of face recognition!

Sunday, January 7, 2018

Stop texting and driving, through empathy.


Every day, as I commute to work or drive around, I notice the number of drivers who still text and drive. Watching drivers next to me as I pass them while they go below the speed limit to “stay safe,” or seeing the person in front look down every two seconds in their side-view mirror. It’s alarming.
Who or what is so important that it’s worth risking lives over a text message? A boss? A significant other? Do you think the recipient would continue the conversation knowing the sender was on the road? Probably not.
If you’ve ever texted and drove while with friends, did they tell you to stop? I hope they did, or you need new friends; just kidding, maybe…

Simple Implementation

If a sender is moving at a speed faster than 10 miles per hour, the recipient would see the sender’s speed displayed with the message, as in the sketch below.
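A toy sketch of that rule, with every name hypothetical since this is a product idea rather than a shipped API:

# Hypothetical sketch: tag outgoing messages with the sender's speed
# whenever it exceeds the driving threshold.
MPH_THRESHOLD = 10

def annotate_outgoing_message(text, sender_speed_mph):
    if sender_speed_mph > MPH_THRESHOLD:
        return '[Sent while moving at %d mph]\n%s' % (sender_speed_mph, text)
    return text

print(annotate_outgoing_message('On my way!', sender_speed_mph=42))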

Privacy

Is it a privacy concern to allow recipients of messages to see you’re driving? I think not. Don’t want them to know? Then don’t text them!
With the new iOS driving feature that auto texts back, perhaps the speed doesn’t need to display since it was an auto generated text message.

Disable / Passenger?

If a passenger is texting, well… it will also show your speed. However, it could be possible to implement a feature that only shows the speed once. Another option: if a user types something along the lines of “no, I am not driving,” the speed stops displaying.
Using the honor system and the thought that someone wouldn’t lie to a loved one about texting and driving, this feature would be quite effective to help stop people from texting while driving.

Empathy through others.

Imagine you’re texting your significant other, parents, or siblings, someone you care about. The speed is displayed so they ask if you’re driving — are you really going to lie to continue the conversation? I’d hope you stop, or your recipient stops responding.
It’s a very simple implementation that, through the empathy of people who care about you, I feel would cause offenders to stop. If it’s really important, call them hands-free!
What are your thoughts? Be sure to follow me as I do a larger case study on this idea that rewards drivers for not texting and driving!

Saturday, January 6, 2018

What we learned about productivity from analyzing 225 million hours of working time in 2017


This post was originally published on the RescueTime blog. Check us out for more like it.
When exactly are we the most productive?
Thinking back on your last year, you probably have no idea. Days blend together. Months fly by. And another year turns over without any real understanding of how we actually spent our time.
Our mission at RescueTime has always been to help you do more meaningful work. And this starts with understanding how you spend your days, when you’re most productive, and what’s getting in your way.
In 2017, we logged over 225 million hours of digital time from hundreds of thousands of RescueTime users around the world.
By studying the anonymized data of how people spent their time on their computers and phones over the past 12 months, we’ve pinpointed exactly what days and times we do the most productive work, how often we’re getting distracted by emails or social media, and how much time a week we actually have to do meaningful work.
Key Takeaways:

What was the most (and least) productive day of 2017?

Simply put, our data shows that people were the most productive on November 14th. In fact, that entire week ranked as the most productive of the year.
That makes sense: with American Thanksgiving the next week and the mad holiday rush shortly after, mid-November is a great time for people to cram in a few extra work hours and get caught up before gorging on turkey dinner.
On the other side of the spectrum, we didn’t get a good start to the year. January 6th — the first Friday of the year — was the least productive day of 2017.

Now, what do we mean when we talk about the “most” or “least” productive days?

RescueTime is a tool that tracks how you spend your time on your computer and phone and lets you categorize activities on a scale from very distracting to very productive. So, for example, if you’re a writer, time spent in Microsoft Word or Google Docs is categorized as very productive while social media is very distracting.
From that data, we calculate your productivity pulse — a score out of 100 for how much of your time you spent on activities that you deem productive.
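As a toy sketch of that idea, assuming a simplified two-bucket model (the real pulse weights activities across the full very-distracting-to-very-productive scale):

# Simplified two-bucket productivity pulse: the share of logged time
# spent on activities the user marked as productive, out of 100.
def productivity_pulse(productive_minutes, total_minutes):
    return round(100 * productive_minutes / total_minutes)

print(productivity_pulse(productive_minutes=180, total_minutes=300))  # -> 60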
On November 14th, the average productivity pulse across all RescueTime users was a not-so-shabby 60.

How much of our day is spent working on a digital device?

One of the biggest mistakes so many of us make when planning out our days is to assume we have 8+ hours to do productive work. This couldn’t be further from the truth.
What we found is that, on average, we only spend 5 hours a day working on a digital device.
And with an average productivity pulse of 53% for the year, that means we only have 12.5 hours a week to do productive work.

What does the average “productive day” look like?

Understanding our overall productivity is a fun exercise, but our data lets us go even deeper.
Looking at the workday (from 8am–6pm, Monday to Friday), how are we spending our time? When do we do our best work? Do different tasks normally get done at different times?
Here’s what we found out:

Our most productive work happens on Wednesdays at 3pm

Our data showed that we do our most productive work (represented by the light blue blocks) between 10am and noon, and then again from 2–5pm each day. However, breaking it down to the hour, we do our most productive work on Wednesdays at 3pm.
Light blue represents our most productive work

Email rules our mornings, but never really leaves us alone

Our days start with email, with Monday morning at 9am being the clear winner for most time spent on email during the week.
Light blue represents our busiest time for emails

Software developers don’t hit peak productivity until 2pm each day

What about how specific digital workers spend their days?
Looking at the time spent in software development tools, our data paints a picture of a workday that doesn’t get going until the late morning and peaks between 2–6pm daily.
Light blue represents when we’re using software development tools

While writers are more likely to be early birds

For those who spend their time writing, it’s a different story.
Writing apps were used more evenly throughout each day with the most productive writing time happening on Tuesdays at 10am.
Light blue represents when we’re using writing tools

What were the biggest digital distractions of 2017?

It’s great to pat ourselves on the back about how productive we were in 2017. But we live in a distracted world and one of our greatest challenges is to stay focused and on task.
Here’s what our research discovered about the biggest time wasters of last year:

On an average day we use 56 different apps and websites

Depending on what you do, this number might not seem that bad. However, when we look at how we use those different apps and websites, things get a bit hairier.
When it comes to switching between different apps and websites (i.e. multitasking), we jump from one task to another nearly 300 times per day and switch between documents and pages within a site 1,300 times per day.

For Slack users, 8.8% of our day is spent in the app

There’s been a lot of talk about how much email and communication eats into our days. But what do the numbers look like?
What we found is that people who use Slack as their work communication tool spend almost 10% of their workday in the app (8.8% to be exact).

We check email or IM 40 times every day

What’s more telling is how often we check our communication tools, whether email or instant messengers like Slack or HipChat.
On average, we check our communication apps 40 times a day, or once every 7.5 minutes during our 5 hours of daily digital work time.

Almost 7% of every workday is spent on social media

I’m sure most of us try not to spend time on social media while at work. But our data showed that almost 7% of every workday was spent on social media.
It’s not only time spent that’s the issue, however. On average, we check in on social media sites 14 times per workday, or nearly 3 times an hour during our 5-hour digital day.

So, what does all this tell us about how we spend our days?
Well, first off, we need to remember that averages shouldn’t be treated as universal truths. Everyone works differently. But having a high-level look at productivity and the things that get in its way is a powerful tool in improving how you work.
The biggest piece of advice we can pull from all this data is to be aware of the limited time you have each day for meaningful work, and spend it wisely.
Our days are filled with distractions, and it’s up to us to protect what time we have.
