Samsung has launched the Samsung Galaxy A9 (2018), the world's first smartphone with four rear cameras. The phone was unveiled at an event in Kuala Lumpur, Malaysia, on Thursday.
The headline feature of the Samsung Galaxy A9 (2018) is its quad rear camera setup, which makes it the world's first smartphone with four rear cameras. For comparison, the Galaxy A7 was launched with three rear cameras.
Samsung Galaxy A9 (2018) specifications
The phone runs Android Oreo 8.1 and has a 6.3-inch Full HD+ Super AMOLED display with dual SIM support. It is powered by the Qualcomm Snapdragon 660 processor with up to 8 GB of RAM and 128 GB of storage.
The Samsung Galaxy A9 (2018) has four rear cameras: a 24-megapixel main lens, a 10-megapixel telephoto lens with 2x optical zoom, an 8-megapixel ultra-wide-angle lens, and a fourth 5-megapixel lens. The four cameras are arranged vertically in a single line. On the front, there is a 24-megapixel camera.
The Samsung Galaxy A9 (2018) has a 3,800 mAh battery that supports fast charging. There is a fingerprint sensor in the phone's power button.
Price of Samsung Galaxy A9 (2018)
The Samsung Galaxy A9 (2018) is priced at 599 euros, which is approximately Rs 51,300. However, Samsung has not yet announced how much the Galaxy A9 (2018) will cost in India. The phone will be available in Bubblegum Pink, Caviar Black, and Lemonade Blue color variants.
This is a difficult post for me to write. It’s a post about Apple — yet it’s not
the same Apple where I spent 22 years of my career. It’s also a post
about competent management — and the utter failure of leadership.
You’ve
probably seen the headlines by now. Apple recently rolled out an update
that slows down older phones, ostensibly in an effort to preserve the
life of aging batteries.
The thing is, Apple didn’t tell
anyone that this was happening; a lot of iPhone users upgraded to newer
models, when they could have simply bought new batteries — a much
smaller financial investment — and continued to use their old phones.
It’s
been a public relations nightmare, with multiple class action suits
already filed. And Apple’s solution to the problem has been to
apologize — rather feebly, and only after the whole thing was uncovered
by a Reddit user — and knock down the battery replacement cost to $29.
(It normally runs about $79.)
This is unbelievable to me.
When
I was at Apple in the early 2000s, I ran into a somewhat similar
problem, albeit on a much smaller scale. About 800 iBooks (yes there was
actual hardware called an iBook), all of them in university settings,
started exhibiting problems with their CD trays.
We acted quickly, and replaced every single one of those 800 units, no questions asked.
I know for a fact that we lost a couple of customers to Microsoft over this. I also know that we did the right thing. We were proud to have done the right thing. And most of our customers appreciated it.
Even
with this slight inconvenience, they felt good about how we were
treating them. Our response to the hardware malfunction enhanced our
brand and our reputation.
Again: The Apple you’re reading about today is not the same company I worked for all those 22 years.
I can think of so many better ways they could have handled this:
1. The best solution
would have been to just be upfront with customers in the first place.
Say, “Hey, we’re glad you enjoy your old-school iPhone, but you’re going
to be left behind; in order to download the latest iOS updates, you
need to upgrade to a newer device.”
This kind of thing is, of course, totally
normal in the tech world; you can’t run the latest macOS on an older
MacBook any more than you can run the latest version of Windows on a
1980s PC. Tech changes, and eventually goes obsolete.
2. Another solution?
In response to the aging battery issue, offer a coupon to those
old-school iPhone users, giving them 50 percent off an iPhone 8. This is
a feel-good solution — a new phone for a fraction of the price! Plus, it gets people into the Apple Store, and makes them actually happy.
3.
Apple could even have offered to replace those old batteries in the
store, free of charge — an inconvenient and cumbersome solution, but at
least it would have shown some real customer service initiative. And
again, it would generate traffic to the Apple Store and an opportunity
to upgrade. Has everyone forgotten about the traffic conversion factor?
Any of those solutions would have been preferable to Apple’s secretive software upgrade — which, again, we only
know about through social media users, not because Apple was
forthcoming about it — to say nothing of its lame apology and its
trifling $29 battery offer.
Here
I might note that, according to some of my sources on the inside, the
actual cost of a battery is in the single digits — so the fact that
Apple is still making people
pay $29 for a new one, in the face of a major PR scandal and with $200
billion in reserves, is absolutely stunning.
Sure: In the short term, Apple’s saving a few bucks. That’s because the company is managing this problem well.
Managing
a problem means getting through it with minimum trouble to the company.
It involves a focus on numbers and accounting, but a short-sightedness
when it comes to relationships and customer goodwill.
Instead of managing the problem, Apple should be leading it — not doing the bare minimum to save its neck, but doing the right thing, taking pride in doing the right thing, and trusting that customers will appreciate it. That’s what leadership means.
In
other words, Apple should be thinking a few steps ahead, and realizing
that a few bucks for free battery replacements (or discounted iPhone
upgrades) mean nothing compared to the loss of goodwill the company now faces.
Goodwill (or relationships, when you get right down to it) is the most precious commodity it or any other company has. And Apple is squandering it.
And
that’s to say nothing of the lack of communication here — as if Apple’s
executives don’t know the old political adage, that the cover-up is
always worse than the deed.
This
whole episode may be seen as a turning point for Apple — its real
transition from Steve’s company into Tim’s. Tim Cook is a great manager, and he’s certainly managing this situation ably.
But Steve would have done something better: He would have shown leadership.
In
one year, HubSpot doubled the number of certified partners in its
platform ecosystem and increased the number of apps installed by
customers by 142% — here’s why that matters.
There’s no lack of cool and exciting software in the martech space.
If you can imagine just about any creative new capability you’d like for
engaging with your customers, there’s probably a martech startup out
there somewhere building it.
The
challenge, however, is figuring out how to get all these different
tools to work well together — without needing a crack team of IT
engineers to take months wiring them up. As Dr. McCoy from Star Trek might have protested, “Damn it, Jim, I’m a marketer, not a systems architect.”
This is the challenge that a centralized platform can solve.
What exactly makes a SaaS solution a “platform” instead of simply being a product?
Almost every SaaS product today has APIs that let it exchange data with other applications. A platform,
however, plays a more active role in coordinating how multiple products
work together. You can picture a platform as a hub, with spokes
connecting other products to its center. The hub binds those disparate
products together and orchestrates them in a common mission.
A
platform creates a stable center of gravity in your marketing and sales
stack by delivering three main benefits through a centralized:
1. Data Model.
A platform does more than just exchange data with other apps in your
stack. It establishes an organizing model for that data — for instance, a
common identity and record structure for a lead, a customer, a deal,
etc. It maps data from all the other apps connected to it into those
common record formats, enforcing a baseline level of data quality. That
centralized and well-structured database then serves as a shared “source
of truth” for the platform and any other app that wants to tap into it (a simplified sketch appears below).
2. Workflow and User Experience. Research has found
that marketers and salespeople can lose a lot of time switching between
different applications. A platform reduces that overhead by
establishing a centralized “home base” where most users can do the
majority of their work. In addition to providing a common view of shared
data across apps, it also becomes the center of their workflow for most
activities — especially if apps embed key features directly into
the platform’s user interface. Individual users might still log into other
apps for more specialized tasks, but there’s much less day-to-day app
switching across your organization.
3. Certification Authority.
When you integrate apps on your own, you must take full responsibility
for making sure that everything plays well together. A platform lifts
some of that burden off your shoulders by establishing a trusted
certification process for apps in its ecosystem. Certified apps will
integrate smoothly, and you’re assured that they’ve been reviewed for a
certain level of compatibility. A helpful directory of all certified apps maintained by the platform company can also make it easier to find the right app to add whenever a particular need arises.
All
of these factors help lower the organizational costs of adopting
multiple products in your marketing and sales stack, by reducing
friction in their selection, installation, and use.
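To make the first of those benefits more concrete, here is a minimal, hypothetical Python sketch of what a centralized data model does: it defines one canonical record format (a Contact, in this example) and maps differently shaped payloads from connected apps into it, rejecting records that fail a basic quality check. The field names, mapping rules, and helper function are illustrative assumptions, not HubSpot's actual schema or API.

```python
# Minimal sketch (not HubSpot's actual API): a platform-style canonical
# record format plus a mapper that normalizes data arriving from a
# hypothetical connected app into that shared "source of truth".
from dataclasses import dataclass
from typing import Optional


@dataclass
class Contact:
    """Canonical contact record shared by the platform and its apps."""
    email: str                      # common identity key across apps
    first_name: Optional[str] = None
    last_name: Optional[str] = None
    lifecycle_stage: str = "lead"   # e.g. lead, customer


def normalize_contact(raw: dict) -> Contact:
    """Map a connected app's payload into the canonical Contact format,
    enforcing a baseline level of data quality along the way."""
    email = (raw.get("email") or raw.get("email_address") or "").strip().lower()
    if "@" not in email:
        raise ValueError("rejected record: a valid email is required")
    return Contact(
        email=email,
        first_name=raw.get("first_name") or raw.get("fname"),
        last_name=raw.get("last_name") or raw.get("lname"),
        lifecycle_stage=raw.get("stage", "lead"),
    )


# Example: two apps send differently shaped data, but both end up in one format.
webinar_signup = {"email_address": "Ada@Example.com", "fname": "Ada"}
crm_import = {"email": "grace@example.com", "last_name": "Hopper", "stage": "customer"}
print(normalize_contact(webinar_signup))
print(normalize_contact(crm_import))
```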
The Growth Dynamics of Platform Ecosystems
To
get a sense of how well a platform is doing at delivering those
benefits, you can look at two key indicators of ecosystem health through
growth of:
1. The number of apps installed by customers.
If more platform customers are installing more certified apps, that’s
one of the strongest signals that there’s real value in the ecosystem
for them. If installing or using apps is difficult — or ultimately
doesn’t achieve results — this metric stalls.
2. The number of certified apps.
Quality matters more than quantity when it comes to a platform
ecosystem. An app directory filled with a bunch of low-quality apps
creates more confusion than clarity. But if the number of high-quality
certified apps is growing, it’s a good sign that the platform dynamics
are working for app developers too. A platform that makes it easier for
businesses to successfully adopt more apps naturally attracts more
developers.
By both of these measures, the HubSpot platform had a good year in 2017.
The
number of apps installed by HubSpot customers on our platform increased
142%, and the number of certified apps in our Connect partner ecosystem
grew by 108%.
The
graphic at the top of this post illustrates what our platform ecosystem
looks like here at the start of 2018. You can also browse our updated integrations directory
to learn more about all the different capabilities these app developers
have to offer. We’re anticipating further expansion in the year ahead.
While we still have much work to do — we aspire to build a truly lovable platform,
and we hold that as a very high bar — we’re excited about the growing
momentum in our ecosystem. But most of all, we’re delighted to see our
customers getting measurable benefits from our platform by effectively
integrating more specialized capabilities into their marketing and sales
stacks.
In
one of my earliest roles at a B2B startup, there were so many fires in a
day that if no “emergencies” occurred for even a couple of hours, I
sensed something was wrong immediately. It got to the point where I knew
hundreds of clients by name because of how frequently I needed to do
damage control.
Meanwhile, new products, new features, and new services were continuously released as we aimed to stay on the ‘cutting edge’ of technology. With limited resources and a mission to stay innovative, low-impact bugs and clients in the minority were deemed low priority. I watched clients cancel and support staff burn out.
Internally,
the “importance of customer service excellence” was reiterated time and
time again through every possible means — email, chat, message boards,
meetings, handbooks, training workshops, etc. Pull aside any employee at
random and they could mindlessly regurgitate that it was one of the
company’s core values. In reality, we missed the mark by a long shot.
Where was the disconnect? The answer isn’t black and white, but I saw two areas contributing the most to this issue.
Not all clients were treated equally. With a ‘move fast and break things’ mentality, beta users and clients who contributed to advancing the product/software were implicitly given priority. That would not necessarily be a bad thing if there were load balancing to ensure sufficient support for the majority of the client base: paying customers with expectations.
Innovation was prioritized over maintenance.
Yes, complacency is dangerous and it is important to grow, to scale.
But at what expense? With stretched resources, it can be easy to neglect
seemingly ‘low impact’ bugs and glitches. The result? A team of support
staff unequipped to provide long-term solutions to recurring issues for
clients that reach out again, and again, and again.
Let’s break this down.
When
a startup makes the transition from early stage (looking for market
validation), into a growth stage, there is no longer the luxury of only dealing with ‘Innovators’ and ‘Early Adopters.’
This. Is. Not. A. Bad. Thing.
Great
technology that is lucky enough to have reached ‘product market fit’
serves a need, fills a gap, or solves a problem. Here’s the thing.
Clients onboarding at this point — the ‘Early Majority’ — have an
inherent expectation that they can reliably use the product. There is a
lower tolerance for inconsistency, errors, glitches.
Most, if not all, are not
willing guinea pigs supporting your grander, ultimate vision. They do
not care about that. They did not sign up for that. They want the tool
they paid for to work. They want it to work, the way it’s designed to,
when it’s supposed to, so they can go about their day running their own
businesses.
What am I saying?
The crux of it is this. There comes a point when innovation can wait. The point where the difference between success and failure is execution. Not the idea. Not intelligence. Consistent execution.
I
get the sense that many startups thrive on the concept of organized
chaos and inherently reject structure. Perhaps it’s a cultural thing.
Perhaps some startups remain functional on this model. However,
organized chaos is still chaos. And I, for one, cannot imagine
operational efficiency being optimal on a model of chaos.
There comes a time for structure, which does not
have to equal rigidity. But it needs to create stability. While this
will mean something different for every company, there are some general
things. I’m talking about standard operating procedures. Enforcing
internal processes (e.g. clients are not QA, production environments are
not meant for testing… test the code!). And please, documentation can
no longer be optional.
Stability is just as important as scalability. Hate to say it, but scaling on an unstable foundation is stupidity. Especially when hubris allows a company to believe they can get away with it.
The company I referred to at the start of this article was a SaaS startup with a subscription model. While it was vital to retain all our customers, the cost of switching platforms was often too high for them to leave and there weren’t comparable programs on the market. As a result, the team calibrated to the errors and took our clients’ tolerance for granted.
There is nothing more detrimental to a business than falling into the trap of believing its technology is great enough to make up for poor service.
As a designer at the beginning of your career, you may not know what to expect from your first job. You could be given lots of work, and because you are the new designer on the team, you do things without question. You might think you are expected to know everything, because nobody told you to seek out the things you need to help you.
Having
worked in the design industry almost every summer in college, I’ve
learned a thing or two about how a new designer, such as myself, can
navigate through challenges and learn in environments based on implied
messages of what we should or shouldn’t do. Knowing the basic tools and
techniques of good design is essential, but it’s the small details
surrounding how we work which can help us progress and open doors. Here
are a few tips that growing designers should take into consideration
during their first year on the job to accelerate career growth.
Asking for Help Doesn't Make You Stupid
It’s okay to ask for help. The issue some designers allude to when they say asking for help is a big no-no is really one of phrasing: instead of directly asking for help, ask for feedback and advice.
If you need help with research, join a research session. If you need help moving a project forward, ask other designers to join you in prioritizing ideas. This will give you direction. Instead of receiving a hard-cut answer, you receive validation and perspective, which will help you develop your own point of view. Designers don’t receive answers; they problem-solve their way to them.
Saying “No” is better than saying “Yes” all the time*
Note
the asterisk. You are in control of what you want to do. You can decide
when you reply to that e-mail or whether you want to go to that meeting. We are
often given so many things to do that we can’t do all of them, yet we
think we have to. Many designers, especially in the beginning of their
career, do everything they are told to do, and this distracts them from
the work they need to do the most. Decide on what is most important to
help get your work done and prioritize.
Don’t say yes to things that get in the way of producing quality work.
Delegating
tasks and prioritizing is hard, but if you can do that, you will get so
much done (and more). It’s okay to say no for valid reasons because it
tells people that you know what’s important.
Speak up
During
a critique, we are expected to provide feedback for our peers, but not everyone does, because they might be self-conscious about their thoughts,
or they don’t make the effort to help. Don’t be selfish with ideas.
Ideas are meant to be expressed and help our fellow designers design for
the people. Feedback is a gift. Feedback is what results in more iterations and better experiences.
Take Breaks
I
used to work hard constantly, whether it was at home, with friends and
family… you name it. But then I realized that, without fail, I will be
working for the rest of my life and work isn’t ever really “done”. I was
taking the time to work on something fleeting, when I could have been
spending time with the people I loved and the things I loved to do
outside of work. Also, too much work can increase stress, which can lead to burnout. It makes sense to do as much work as you can to get to
a certain job or rank, but that takes time. Just do what you can and
relax when you feel overworked or exhausted. In the end, health is more important than work, because without health we can’t work.
Be Present
As
tempting as it is to work from home, especially for people who have the
privilege of doing so all the time, it is crucial to be present. Even
if the quality of the work has not been affected, collaboration is such an important aspect of how we work as designers. Being
present in the office can make all the difference, especially when
working with the people on your team. It’s not a team if everyone isn’t present.
If you have any questions about design, message me on LinkedIn and I’ll write about it!
Voice is the future. The world’s technology giants are clamoring for vital market share, with ComScore projecting that “50% of all searches will be voice searches by 2020.”
However,
the historical antecedents that have led us to this point are as
essential as they are surprising. Within this report, we take a trip
through the history of speech recognition technology, before providing a
comprehensive overview of the current landscape and the tips that all
marketers need to bear in mind to prepare for the future.
The History of Speech Recognition Technology
Speech
recognition technology entered the public consciousness rather
recently, with the glossy launch events from the tech giants making
worldwide headlines.
The appeal is instinctive; we are fascinated by machines that can understand us.
From
an anthropological standpoint, we developed the spoken word long in
advance of its written counterpart and we can speak 150 words per
minute, compared with the paltry 40 words the average person can type in
60 seconds.
In
fact, communicating with technological devices via voice has become so
popular and natural that we may be justified in wondering why the
world’s richest companies are only bringing these services to us now.
The
history of the technology reveals that speech recognition is far from a
new preoccupation, even if the pace of development has not always
matched the level of interest in the topic. As we can see below, major
breakthroughs dating back to the 18th century have provided the platform
for the digital assistants we all know today.
The
earliest advances in speech recognition focused mainly on the creation
of vowel sounds, as the basis of a system that might also learn to
interpret phonemes (the building blocks of speech) from nearby
interlocutors.
These
inventors were hampered by the technological context in which they
lived, with only basic means at their disposal to invent a talking
machine. Nonetheless, they provide important background to more recent
innovations.
Dictation
machines, pioneered by Thomas Edison in the late 19th century, were
capable of recording speech and grew in popularity among doctors and
secretaries with a lot of notes to take on a daily basis.
However,
it was not until the 1950s that this line of inquiry would lead to
genuine speech recognition. Up to this point, we see attempts at speech
creation and recording, but not yet interpretation.
Audrey,
a machine created by Bell Labs, could understand the digits 0–9, with a
90% accuracy rate. Interestingly, this accuracy level was only recorded
when its inventor spoke; it hovered between 70% and 80% when other
people spoke to Audrey.
This
hints at some of the persistent challenges of speech recognition; each
individual has a different voice and spoken language can be very
inconsistent. Unlike text, which has a much greater level of
standardization, the spoken word varies greatly based on regional
dialects, speed, emphasis, even social class and gender. Therefore,
scaling any speech recognition system has always been a significant
obstacle.
Alexander
Waibel, who worked on Harpy, a machine developed at Carnegie Mellon
University that could understand over 1,000 words, built on this point:
“So
you have things like ‘euthanasia’, which could be ‘youth in Asia’. Or
if you say ‘Give me a new display’ it could be understood as ‘give me a
nudist play’.”
Until
the 1990s, even the most successful systems were based on template
matching, where sound waves would be translated into a set of numbers
and stored. These would then be triggered when an identical sound was
spoken into the machine. Of course, this meant that one would have to
speak very clearly, slowly, and in an environment with no background
noise to have a good chance of the sounds being recognized.
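As a rough illustration of the template-matching idea described above, here is a hedged Python sketch: each known word is stored as a reference vector of numbers, and an incoming sound is recognized only if it lands close enough to one of the stored templates. The words, vectors, and threshold are invented for the example; the systems of that era matched sound-wave-derived measurements, not these toy numbers.

```python
# Illustrative sketch of template matching, assuming each utterance has
# already been reduced to a fixed-length feature vector (the vocabulary
# and vectors below are made up for the example).
import math

# Stored templates: one reference vector per known word.
TEMPLATES = {
    "zero": [0.1, 0.9, 0.3, 0.2],
    "one":  [0.8, 0.2, 0.7, 0.1],
    "two":  [0.4, 0.4, 0.9, 0.6],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features, threshold=0.5):
    """Return the closest stored template, or None if nothing is close
    enough -- which is why speakers had to talk slowly and clearly."""
    best_word, best_dist = None, float("inf")
    for word, template in TEMPLATES.items():
        d = distance(features, template)
        if d < best_dist:
            best_word, best_dist = word, d
    return best_word if best_dist <= threshold else None

print(recognize([0.78, 0.22, 0.68, 0.12]))  # close to the "one" template -> "one"
print(recognize([0.0, 0.0, 0.0, 0.0]))      # matches nothing well -> None
```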
IBM
Tangora, released in the mid-1980s and named after Albert Tangora, once
the world’s fastest typist, could adjust to the speaker’s voice. It
still required slow, clear speech and no background noise, but its use
of hidden Markov models allowed for increased flexibility through data
clustering and the prediction of upcoming phonemes based on recent
patterns.
Although
it required 20 minutes of training data (in the form of recorded
speech) from each user, Tangora could recognize up to 20,000 English
words and some full sentences.
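The phoneme-prediction idea behind systems like Tangora can be illustrated, in a drastically simplified form, by a first-order Markov chain over phonemes: given the phoneme just heard, pick the most probable next one from learned transition probabilities. A real hidden Markov model also maintains hidden states and acoustic emission probabilities, so treat this only as a sketch of the "predict what comes next from recent patterns" idea; the phonemes and probabilities below are illustrative assumptions.

```python
# Simplified illustration: a first-order Markov chain that predicts the
# next phoneme from the one just recognized. (A full HMM also models
# hidden states and acoustic emissions; this table is invented.)
TRANSITIONS = {
    # current phoneme -> {candidate next phoneme: probability}
    "k": {"ae": 0.6, "ah": 0.3, "t": 0.1},
    "ae": {"t": 0.7, "n": 0.3},
    "t": {"ah": 0.5, "s": 0.5},
}

def predict_next(current: str) -> str:
    """Return the most probable next phoneme after `current`."""
    candidates = TRANSITIONS.get(current, {})
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

# "k" is most often followed by "ae", and "ae" by "t" -- so after hearing
# "k ae" the model expects "t" (as in "cat").
print(predict_next("k"))   # ae
print(predict_next("ae"))  # t
```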
The
seeds are sown here for voice recognition, one of the most significant
and essential developments in this field. It was a long-established
truism that speech recognition could only succeed by adapting to each
person’s unique way of communicating, but arriving at this breakthrough
has been much easier said than done.
It
was only in 1997 that the world’s first “continuous speech recognizer”
(i.e., one no longer had to pause between each word) was released, in the
form of Dragon’s NaturallySpeaking software. Capable of understanding
100 words per minute, it is still in use today (albeit in an upgraded
form) and is favored by doctors for notation purposes.
Machine
learning, as in so many fields of scientific discovery, has provided
the majority of speech recognition breakthroughs in this century. Google
combined the latest technology with the power of cloud-based computing
to share data and improve the accuracy of machine learning algorithms.
This culminated in the launch of the Google Voice Search app for iPhone in 2008.
Driven
by huge volumes of training data, the Voice Search app showed
remarkable improvements on the accuracy levels of previous speech
recognition technologies. Google built on this to introduce elements of
personalization into its voice search results, and used this data to
develop its Hummingbird algorithm, arriving at a much more nuanced
understanding of language in use. These strands have been tied together
in the Google Assistant, which is now resident on almost 50% of all
smartphones.
It
was Siri, Apple’s entry into the voice recognition market, that first
captured the public’s imagination, however. As the result of decades of
research, this AI-powered digital assistant brought a touch of humanity
to the sterile world of speech recognition.
After
Siri, Microsoft launched Cortana, Amazon launched Alexa, and the wheels
were set in motion for the current battle for supremacy among the tech
giants’ respective speech recognition platforms.
In
essence, we have spent hundreds of years teaching machines to complete a
journey that takes the average person just a few years. Starting with
the phoneme and building up to individual words, then to phrases and
finally sentences, machines are now able to understand speech with a
close to 100% accuracy rate.
The
techniques used to make these leaps forward have grown in
sophistication, to the extent that they are now loosely based on the
workings of the human brain. Cloud-based computers have entered millions
of homes and can be controlled by voice, even offering conversational
responses to a wide range of queries.
That journey is still incomplete, but we have travelled quite some distance from the room-sized computers of the 1950s.
The Current Speech Recognition Landscape
Smartphones
were originally the sole place of residence for digital assistants like
Siri and Cortana, but the concept has been decentralized over the past
few years.
At
present, the focus is primarily on voice-activated home speakers, but
this is essentially a Trojan horse strategy. By taking pride of place in
a consumer’s home, these speakers are the gateway to the proliferation
of smart devices that can be categorized under the broad ‘Internet of
Things’ umbrella. A Google Home or Amazon Echo can already be used to
control a vast array of Internet-enabled devices, with plenty more due
to join the list by 2020. These will include smart fridges, headphones,
mirrors, and smoke alarms, along with an increased list of third-party
integrations.
Recent Google research
found that over 50% of users keep their voice-activated speaker in
their living room, with a sizeable number also reporting that they have
one in their bedroom or kitchen.
And
this is exactly the point; Google (and its competitors) want us to buy
more than one of these home devices. The more prominent they are, the
more people will continue to use them.
Their
ambition is helped greatly by the fact that the technology is now
genuinely useful in the accomplishment of daily tasks. Ask Alexa, Siri,
Cortana, or Google what the weather will be like tomorrow and it will
provide a handy, spoken summary. It is still imperfect, but speech
recognition has reached an acceptable level of accuracy for most people
now, with all major platforms reporting an error rate of under 5%.
As
a result, these companies are at pains to plant their flag in our homes
as early as possible. Hardware, for example in the shape of a home
speaker system, is not something most of us purchase often. If a consumer buys a Google Home, for instance, it seems probable that they will
complement this with further Google-enabled devices, rather than
purchase from a rival company and create a disjointed digital ecosystem
under their roof. Much easier to seek out devices that will enable
continuity and greater convenience.
For this reason, it makes sense for Amazon to sell the Echo Dot for as low as $29.99. That equates to a short-term financial loss for Amazon on each device sold, but the long-term gains will more than make up for it.
There are estimated to be 33 million smart speakers in circulation already (Voice Labs report, 2017) and both younger and older generations are adopting the technology at a rapid rate.
In fact, the demographic profile of an assistant “superuser” (someone who spends twice as much time with personal assistants each month as the average user) is a 52-year-old woman who spends 1.5 hours per month with assistant apps.
Perhaps
most importantly for the major tech companies, consumers are
increasingly comfortable making purchases through their voice-enabled
devices.
Google
reports that 62% of users plan to make a purchase through their speaker
over the coming month, while 58% use theirs to create a weekly shopping
list.
Short-term
conclusions about the respective business strategies of Amazon and
Google, in particular, are relatively easy to draw. The first-mover
advantage looks set to be marked in this arena, especially as speech
recognition continues to develop into conversational interactions that
lead to purchases.
We have written before
about the two focal points of the voice search strategy for the tech
giants: the technology should be ubiquitous and it must be seamless.
Voice is already a multi-platform ecosystem, but we are some distance
from the ubiquity it seeks.
To
gain insight into the likely outcome of the current competition, it is
worth assessing the strengths and weaknesses of the four key players in
western markets: Amazon, Google, Apple, and Microsoft.
Amazon
First-party Hardware: Echo, Echo Dot, Echo Show, Fire TV Stick, Kindle.
Digital Assistant: Alexa
Usage Statistics:
“Tens of millions of Alexa-enabled devices” sold worldwide over the 2017 holiday season (Amazon)
75% of all smart speakers sold to date are Amazon devices (Tech Republic)
The
Echo Dot was the number one selling device on Amazon over the holidays,
with the Alexa-enabled Fire TV stick in second place. (Amazon)
The
average Alexa user spends 18 minutes a month interacting with the
device, compared to just five minutes for Google Home (Gartner)
There are now over 25,000 skills available for Alexa (Amazon)
Overview:
The
cylindrical Echo device and its younger sibling, the Echo Dot, have
been the runaway hit of the smart speaker boom. By tethering the
speakers to a range of popular third-party services and ‘skills’, Amazon
has succeeded in making the Echo a useful addition to millions of
households.
As Dave Limp, head of Amazon devices, put it recently,
“We think of it as ambient computing, which is computer access that’s less dedicated personally to you but more ubiquitous.”
Ubiquity seems a genuine possibility, based on the sales figures.
After
a holiday season when the Echo Dot became the most popular product on
Amazon worldwide, the Alexa app occupied top position in the App Store,
ahead of Google’s rival product.
Amazon’s
heritage as an online retailer gives it an innate advantage when it
comes to monetizing the technology, too. The Whole Foods acquisition
adds further weight to this, with the potential to integrate the offline
and online worlds in a manner other companies will surely envy.
Moreover,
Amazon has never depended on advertising to keep its stock prices
soaring. Quite the contrary, in fact. As such, there is less short-term
pressure to force this aspect of their smart speakers.
With
advertisers keen to find a genuine online alternative to Google and
Facebook, Amazon is in a great position to capitalize. There is a fine
balancing act to maintain here, nonetheless. Amazon has most to lose, in
terms of consumer trust and reputation, so it will only move into
advertising for Alexa carefully.
The company denies it has plans to do so, but as research company L2 Inc wrote recently,
“Amazon has approached major brands asking if they would be willing to pay for Amazon’s Choice, a designation given to best-in-class products in a particular category.”
We
should expect to see more attempts from Amazon to provide something
beyond just paid ads on search results. Voice requires new advertising
solutions and Amazon will tread lightly at first to ensure it does not
disrupt the Alexa experience. The recently announced partnership with publishing giant Hearst is a sign of things to come.
The
keys to Alexa’s success will be the integration of Amazon’s own assets,
along with the third-party support that has already led to the creation
of over 25,000 skills. With support announced
for new headphones, watches, fridges, and more, Amazon looks set to
stay at the forefront of voice recognition technology for some time to
come.
Google
First-party Hardware: Google Home, Google Home Mini, Google Home Max, Pixelbook, Pixel smartphones, Pixel Buds, Chromecast, Nest smart home products.
Digital Assistant: Google Assistant
Usage Statistics:
Google Home has a 24% share of the US smart speaker market (eMarketer)
There are now over 1,000 Actions for Google Home (Google)
Google Assistant is available on over 225 home control brands and more than 1,500 devices (Google)
The most popular Google Assistant apps are games, followed closely by home control applications (Voicebot.ai)
Overview:
Google
Assistant is directly tied to the world’s biggest search engine,
providing users with direct access to the largest database of
information ever known to mankind. That’s not a bad repository for a
digital assistant to work with, especially as Google continues to make
incremental improvements to its speech recognition software.
Recent
research from Stone Temple Consulting across 5,000 sample queries found
Google to be the most accurate solution, by quite some distance.
Combined
with Google Photos, Google Maps, YouTube, and a range of other
effective services, Google Assistant has no shortage of integration
possibilities.
Google
may not have planned to enter the hardware market again after the
lukewarm reception for its products in the past. However, this new
landscape has urged the search giant into action in a very serious way.
There is no room for error at the moment, so Google has taken matters
into its own hands with the Pixel smartphones, the Chromecast, and of
course the Home devices.
The
Home Mini has been very popular, and Google has added the Home Max to
the collection, which comes in at a higher price than even the Apple
HomePod. All bases are very much covered.
Google
knows that the hardware play is not a long-term solution. It is a
necessary strategy for the here and now, but Google will want to
convince other hardware producers to integrate the Assistant, much in
the same way it did with Android smartphone software. That removes the
expensive production costs but keeps the vital currency of consumer
attention spans.
This plan is already in action, with support just announced for a range of smart displays.
This
adds a new, visual element to consumer interactions with smart speakers
and, vitally, brings the potential to use Google Photos, Hangouts, and
YouTube.
Google
also wants to add a “more human touch” to its AI assistant and has
hired a team of comedians, video game designers, and empathy experts to
inject some personality.
Google
is, after all, an advertising company, so the next project will be to
monetize this technology. For now, the core aim is to provide a better,
more human experience than the competition and gain essential territory
in more households. The search giant will undoubtedly find novel ways to
make money from that situation.
Although
it was slower off the mark than Amazon, Google’s advertising nous and
growing range of products mean it is still a serious contender in both
the short- and long-term.
Apple
Hardware: Apple HomePod (Due to launch in 2018 at $349), iPhone, MacBooks, AirPods
Digital Assistant: Siri
Usage Statistics:
42.5% of smartphones have Apple’s Siri digital assistant installed (Highervisibility)
41.4 million monthly active users in the U.S. as of July 2017, down 15% on the previous year (Verto Analytics)
19% of iPhone users engage with Siri at least daily (HubSpot)
Overview:
Apple
retains an enviable position in the smartphone and laptop markets,
which has allowed it to integrate Siri with its OS in a manner that
other companies simply cannot replicate. Even Samsung, with its Bixby
assistant, cannot boast this level of synergy, as its smartphones
operate on Android and, as a result, have to compete with Google
Assistant for attention.
Nonetheless,
it is a little behind the curve when it comes to getting its hardware
into consumers’ home lives. The HomePod will, almost certainly, deliver a
much better audio experience than the Echo Dot or Google Home Mini,
with a $350 price tag to match. It will contain a host of impressive
features, including the ability to judge the surrounding space and
adjust the sound quality accordingly.
The
HomePod launch has been delayed, with industry insiders suggesting that
Siri is the cause. Apple’s walled garden approach to data has its
benefits for consumers, but it has its drawbacks when it comes to
technologies like voice recognition. Google has access to vast
quantities of information, which it processes in the cloud and uses to
improve the Assistant experience for all users. Apple does not possess
this valuable resource in anything like the same quantity, which has
slowed the development of Siri since its rise to fame.
That said, these seem likely to be short-term concerns.
Apple
will stay true to its core business strategy, one that has served it rather well so far. The HomePod will sit at the premium end of
the market and will lean on Apple’s design heritage, with a focus on
providing a superior audio experience. It will launch with support for
Apple Music alone, so unless Apple opens up its approach to third
parties, it could be one for Apple fans only. Fortunately for Apple,
there are enough of those to ensure the product gains a foothold.
Microsoft
Hardware: Harman/kardon Invoke speaker, Windows smartphones, Microsoft laptops
Digital Assistant: Cortana
Usage Statistics:
5.1% of smartphones have the Cortana assistant installed
Cortana now has 133 million monthly users (Tech Radar)
25% of Bing searches are by voice (Microsoft)
Overview:
Microsoft
has been comparatively quiet on the speech recognition front, but it
possesses many of the component parts required for a successful speech
recognition product.
With
a very significant share of the business market, the Office suite of
services, and popular products like Skype and LinkedIn, Microsoft
shouldn’t be written off.
Apple’s
decision to default to Google results over Bing on its Siri assistant
was a blow to Microsoft’s ambitions, but Bing can still be a competitive
advantage for Microsoft in this arena. Bing is a source of invaluable
data and has helped develop Cortana into a much more effective speech
recognition tool.
The
Invoke speaker, developed by Harman/kardon with Cortana integrated into
the product, has also been reduced to a more approachable $99.95.
There
are new Cortana-enabled speakers on the way, along with smart home
products like thermostats. This should see its levels of uptake
increase, but the persistent feeling is that Microsoft may be a little
late to this party already.
Where
Microsoft can compete very credibly is in the office environment, which
has also become a central consideration for Amazon. Microsoft is
prepared to take a different route to gain a foothold in this market,
but it could still be a very profitable one.
The Future of Speech Recognition Technology
We
are still some distance from realizing the true potential of speech
recognition technology. This applies both to the sophistication of the
technology itself and to its integration into our lives. The current
digital assistants can interpret speech very well, but they are not the
conversational interfaces that the technology providers want them to be.
Moreover, speech recognition remains limited to a small number of
products.
The rate of progress, compared to the earliest forays into speech recognition, is really quite phenomenal nonetheless.
As
such, we can look into the near future and envisage a vastly changed
way of interacting with the world around us. Amazon’s concept of
“ambient computing” seems quite fitting.
The
smart speaker market has significant room left to grow, with 75% of US
homes projected to have at least one by the end of 2020.
Now
that users are getting over the initial awkwardness of speaking to
their devices, the idea of telling Alexa to boil the kettle or make an
espresso does not seem so alien.
Voice is becoming an interface of its own, moving beyond the smartphone to the home and soon, to many other quotidian contexts.
We
should expect to see more complex input-output relationships as the
technology advances, too. Voice-voice relationships restrict the
potential of the response, but innovations like the Amazon Echo Show and
Google’s support for smart displays will open up a host of new
opportunities for engagement. Apple and Google will also incorporate
their AR and VR applications when the consumer appetite reaches the
required level.
Challenges
remain, however. First of all, voice search providers need to figure
out a way to provide choice through a medium that lends itself best to
short responses. Otherwise, how would it be possible to ensure that a
user is getting the best response to their query, rather than the
response with the highest ad budget behind it?
Modern
consumers are savvy and have access to almost endless information, so
any misjudgements from brands will be documented and shared with the
user’s network.
A
new study from Google has shown that there is an increasing acceptance
among consumers that brands will use smart speakers to communicate with
them. A sizeable number revealed a willingness to receive information
about deals and sales, with almost half wanting to receive personalized
tips.
Speech
recognition technology provides the platform for us to communicate
credibly, but it is up to marketers to make the relationship with their
audience mutually beneficial.
Key Takeaways
Brands
need to consider how they can make an interaction more valuable for a
consumer. The innate value proposition of voice search is that it is
quick, convenient, and helpful. It is only by assimilating with, and adding to, this relationship between technology and consumer that they
will cut through. The Beauty and the Beast example provides an early,
cautionary tale for all of us.
Amazon
is in prime position to monetize its speech recognition technology, but
still faces obstacles. Sponsorship of Amazon’s Choice has been explored
as a route to gain revenue without losing customers.
Google
has made speech recognition a central focus for the growth of its business. With a vast quantity of data at its disposal and increasing third-party support, Google Assistant will pose a serious threat to
Amazon’s Alexa this year.
Marketers
should take advantage of technical best practices for voice search to
increase visibility today. While this technology is still developing, we
need to give it a helping hand as it completes its mammoth tasks.
The
best way to understand how people use speech recognition technology is
to engage with it frequently. Marketers serious about pinpointing areas
of opportunity should be conducting their own research at home, at work,
and on the go.
Hardik Gandhi holds a Master’s in Computer Science and is a blogger, developer, SEO provider, and motivator. He writes Gujarati-language and programming books, and offers career advice and guidance of all kinds.