Programming & IT Tricks

Showing posts with label amazon. Show all posts

Thursday, October 11, 2018

Samsung Galaxy A9 (2018), the world's first smartphone with four rear cameras, launched


Samsung has launched the Samsung Galaxy A9 (2018), the world's first smartphone with four rear cameras. The phone was unveiled at an event in Kuala Lumpur, Malaysia, on Thursday. The quad rear camera is the Galaxy A9 (2018)'s biggest feature and makes it the first smartphone of its kind. For comparison, the Galaxy A7 was launched with three rear cameras.
Samsung Galaxy A9 (2018) specification

The phone runs Android Oreo 8.1 and has a 6.3-inch Full HD+ Super AMOLED display with dual SIM support. It is powered by a Qualcomm Snapdragon 660 processor with up to 8 GB of RAM and 128 GB of storage.

The Galaxy A9 (2018)'s four rear cameras comprise a 24-megapixel main lens, a 10-megapixel telephoto lens with 2x optical zoom, an 8-megapixel ultra-wide-angle lens and a fourth 5-megapixel lens. The four cameras are arranged vertically in a single line. On the front is a 24-megapixel camera.

Samsung Galaxy A9 (2018) has a 3800 mAh battery that supports fast charging. There will be a fingerprint sensor in the phone's power button.
Price of Samsung Galaxy A9 (2018)

The price of the Samsung Galaxy A9 (2018) is 599 euros, which is approximately Rs 51,300. However, Samsung has not yet announced how much the Galaxy A9 (2018) will cost in India. The phone will be available in Bubblegum Pink, Caviar Black and Lemonade Blue colour variants.

Sunday, January 28, 2018

Up close with Apple HomePod, Siri’s expensive new home


Hello, HomePod. Image courtesy of Apple
If it were only a question of quality, Apple’s HomePod, which, after a months-long delay finally ships on February 9, should be an unqualified success. Its audio quality is excellent, especially considering its size.
Seven months ago, I sat in a small room and heard Apple’s 7-inch smart speaker play music for the first time. It sounded good, but the demonstration was short and lacked a key component of the smart speaker’s feature set: Siri integration.
Recently, though, I heard Apple’s HomePod again in a variety of scenarios and spaces. It sounded even better. Compared with the larger Google Home Max and the aurally excellent Sonos One, the HomePod’s separation of sounds and fidelity to the original instrumentation is astonishing.
This listening experience also added the smarts, or utility, that was missing back in June. Apple’s HomePod is, finally, a functioning Siri smart speaker.
Using the trigger phrase “Hey Siri,” HomePod responded to a variety of common Siri questions, activated HomeKit-enabled smart device tasks, and launched Siri-driven tasks, most revolving around Apple Music.
Put simply, Apple’s HomePod appears as good a smart speaker as most and a better audio device than many. However, it’s telling that Apple compares its first smart speaker to both the $399 Google Home Max and the $99.99 All New Amazon Echo. At $349, the HomePod is more expensive than virtually all of Amazon’s Echo line and most Google Home devices. The more comparably sized Google Home lists for $129.
This is a crucial moment for Siri, the voice assistant that now, according to Apple, has 500M monthly active devices. It lives in our iPhones, iPads and on our Apple Watches, but, until now, it has never had a permanent place in the home. And it faces an uphill battle.
HomePod enters a crowded smart speaker market, one that Amazon owns, with a $350 product. This means Apple must work twice as hard to sell consumers on the HomePod’s ease of setup, standout audio quality and deep integration with the iOS, Siri and HomeKit ecosystem.
Does all that make it worth it? Let’s walk through some of the particulars and maybe you can decide.

Using the HomePod

From the outside, the HomePod looks like a mesh-covered Mac Pro (it comes in white and space gray). Underneath, there’s a stacked array of audio technology, starting with seven horn-loaded tweeters at the base, a six-microphone array in the center and the sizeable woofer, with a claimed 22mm of travel, pointed straight up at the ceiling. Apple’s A8 chip handles the signal processing.
It is, in all, an excellent hardware package that, unlike most other smart speakers, uses its own microphones to adjust audio for each listening environment.
The matrix of audio components is not inconsequential. In my listening session, the HomePod picked apart tracks like Ed Sheeran’s Shape of You, letting me hear both Sheeran’s guitar picking and the clarity of his voice. It was like he was playing in a small café for an audience of me. The bass notes on songs like Gregory Porter’s Holding On and Ariana Grande’s Side to Side were deep and resonant.
The HomePod setup process is as easy and fast as you would expect from an Apple device. With the latest version of iOS, 11.2.5, installed on your iPhone, it will recognize the HomePod as soon as you bring your iPhone near it. After that, the iPhone and HomePod steer you through a handful of settings, including selecting the room where you’ll place the HomePod (it can get the list from the Home app, if you’re using it). It will also, with your permission, gather connections to your lists and reminders, and will transfer all your iCloud and network settings so you don’t have to do things like manually enter user names, SSIDs and passwords. HomePod even grabs your Siri settings. Like a male voice? HomePod’s Siri will speak in that same male voice.
Then you get to the Apple Music portion of setup. Since Apple Music is the only natively supported music service, it’s pretty much your only option for streaming music, unless you use the HomePod as an AirPlay-connected speaker for your phone. At least every new HomePod comes with a three-month free subscription to Apple Music.
The combination of Siri and a smart speaker is quite compelling.
Since Apple Music has access to 45 million songs, you can ask it pretty much any music question and get a good answer: from playing current hits to finding a decent ’80s channel to playing various versions of the same song. The more you use Apple Music, the more it tailors responses to your preferences. I also noticed that, even with the volume at 90 percent, the HomePod could still hear when someone said, “Hey Siri, stop.”
Image courtesy of Apple
Apple updated Siri with a full complement of Grammy-related responses, including playlists of the nominees and, after the Grammy Awards are announced, playlists of the winners. It’s a shame that the smart speaker doesn’t ship until after the awards show airs on January 28.

Siri house smarts

HomePod’s Siri integration works just as you would expect it to. You can ask Siri for the latest news and it will launch a news brief from one of your favorite sources (CNN, Fox News, NPR). The white glowing spot on top of HomePod lets you know it’s listening. It has your weather update and can tell you if you need an umbrella. Siri has access to your reminders, so you can build a shopping list by talking to Siri.
It also lets you launch scenes with phrases like, “Hey Siri, Good Morning.” In the example I saw, that phrase triggered the raising of HomeKit-compatible blinds, turning on a coffee maker and raising the temperature through a smart thermostat. I like what I saw, but I don’t think the creation of Scenes in the Home app is as straightforward as it should be. I’m hoping Apple tears down and rebuilds the Home app, so it better integrates basic functions with automation and scene-building.
HomePod is also adept at sending messages to your contacts using only your voice, and at reading incoming messages back to you as well. It also handles voice calls, but only as a speakerphone that accesses your WiFi-connected iPhone (you select the audio device on your phone). The Amazon Echo can, by contrast, make calls to other Echos and those with the Alexa app without the need for a smartphone.
Since Apple doesn’t sell information or let you buy products through the HomePod, it’s not interested in your personal information. Apple encrypts your queries and anonymizes your personal data. Apple will even let you turn off “Hey Siri” listening, which means you must touch the device to launch a request (there’s also touch for volume control and mute).
Even with all these smart and home automation features, Apple believes most people use smart speakers like the HomePod for music, which is why it’s so surprising that it won’t ship with the ability to link up two HomePods as a stereo pair. Even after the February 9 ship date, you’ll have to wait for a software update to access that feature. If you do buy one or more HomePods, though, it’ll be worth the wait. Two HomePods playing just about anything is incredible.
What Apple has here is an ultra-high-quality speaker and the first physical instantiation of Siri without a screen. The fact that Apple is finally entering the smart speaker race is cause for muted celebration. It’s attractive, sounds amazing and is an excellent Siri ambassador. And it’s $349. Is better sound and solid iOS integration (plus the added cost of an Apple Music subscription) worth spending nearly four times as much as a decent sounding Echo?
Guess we’ll have our answer when the HomePod goes on pre-order this Friday.
Clarifications (1–26–2018): The HomePod does not support calendar. In addition, the iPhone call connection is over WiFi, not Bluetooth.

Thursday, January 18, 2018

Live TV has a new home on Fire TV


“Alexa, tune to HBO Family.”

We’ve all been there: the infinite scroll, wandering around with no idea what to watch. Good news for the indecisive folks in the room: with the new On Now row and Channel Guide on Fire TV, it’s easier than ever to watch Live TV with Amazon Channels.
Amazon Channels is the Prime benefit that lets Prime members subscribe to over 100 channels, with no cable required, no apps to download, and the ability to cancel anytime. Most movies and TV shows included in your subscriptions are available to watch on demand. Some channels also feature Watch Live, which gives you the option to live stream programming on supported devices at the same time it’s broadcast on TV. That means you’ll be able to watch and live-tweet Westworld when everyone else is watching.

On Now ✨

Here at Fire TV, we want to make it really easy to discover the live programming available to you. If you’re signed up for HBO, SHOWTIME, STARZ, or Cinemax through Amazon Channels, you will see a new row on your homepage called On Now. That row will show you all of the programming that is live now.

On Later ⏰

In addition to this handy-dandy row, you will also have the ability to look into the future 🔮. If you’re curious what’s on later today or coming up in the next two weeks, you can use the new Channel Guide to browse the entire schedule. To launch the Guide, simply press the Options button (it looks like a hamburger) on the Alexa Voice Remote while watching Live TV to see your channels and all the future programming information. Don’t forget to favorite ⭐️ your top channels so that they show up first in your Guide. Coming up this weekend, SHOWTIME Showcase will be airing Death Becomes Her and St. Elmo’s Fire; who needs weekend plans when two of the best movies are on?!

Just say — ”Alexa, watch HBO.” 🗣️

If you already know what channel you want to watch — simply press the microphone button on your Alexa Voice Remote, or speak to your connected Echo device, and say “Alexa, watch ___”. The Live channel will instantly tune on command.
Here are a few voice commands to try:
  • “Alexa, watch HBO.”
  • “Alexa, tune to HBO Family.”
  • “Alexa, go to Cinemax.”
  • “Alexa, go to SHOWTIME.”
  • “Alexa, watch STARZ.”
  • “Alexa, go to the Channel Guide.”
As always, you can ask Alexa to search for shows, movies, actors, genres and more. If you search for a show or movie that happens to be airing live, the channel will appear in the search results.
The new Live TV experience is currently available with subscriptions offered through Amazon Channels (HBO, SHOWTIME, STARZ, Cinemax), and we will be adding more channels in the near future. Start your free trial with these channels today to get started with Live TV on your Fire TV. This functionality is only available if you have an HBO, SHOWTIME, STARZ, or Cinemax subscription through Amazon Channels. If you access content from these providers through another method, you will not see an On Now row or the Channel Guide on your Fire TV. Happy streaming!

Saturday, January 13, 2018

Facebook’s newsfeed changes: a disaster or an opportunity for news publishers?


Social media and digital executives in newsrooms already have a tough job connecting their content to consumers via social media, but Facebook’s proposed changes in the algorithms of its ‘newsfeed’ are going to make it a lot harder. Social networks offer immense opportunities for reaching vast new audiences and increasing the engagement of users with journalism. The most important platform in the world is about to make that more difficult.
Clearly, this is a blow for news publishers who have spent the last decade or so fighting a battle for survival in a world where people’s attention and advertising have shifted to other forms of content and away from news media brands’ own sites. They are clearly very concerned. Yet could this be a wake-up call that will mean the better, most adaptive news brands benefit?
I’m not going to argue that this is good news for news publishers, but blind panic or cynical abuse of Facebook is not a sufficient response. The honest answer is that we don’t know exactly what the effect will be because Facebook, as usual, has not given out the details, and different newsrooms will be impacted differently.
It’s exactly the kind of issue we are looking at in our LSE Truth, Trust and Technology Commission. Our first consultation workshop with journalists, and related practitioners from sectors such as the platforms, is coming up in a few weeks. This issue matters not just for the news business. It is also central to the quality and accessibility of vital topical information for the public.
Here’s my first attempt to unpack some of the issues.
Mark Zuckerberg: making time on Facebook ‘well spent’
Firstly, this is not about us (journalists). Get real. Facebook is an advertising revenue generation machine. It is a public company that has a duty to maximise profits for its shareholders. It seeks people’s attention so that it can sell it to advertisers. It has a sideline in charging people to put their content on its platform, too. It is a social network, not a news-stand. It was set up to connect ‘friends’ not to inform people about current affairs. Journalism, even where shared on Facebook, is a relatively small part of its traffic.
Clearly, as Facebook has grown it has become a vital part of the global (and local) information infrastructure. Other digital intermediaries such as Google are vastly important, and other networks such as Twitter are significant. And never forget that there are some big places such as China where other similar networks dominate, not Facebook or other western companies. But in many countries and for many demographics, Facebook is the Internet, and the web is increasingly where people get their journalism. It’s a mixed and shifting picture but as the Reuters Digital News Report shows, Facebook is a critical source for news.
From Reuters Digital News Report 2017
If you read Zuckerberg’s statement he makes it clear that he is trying to make Facebook a more comfortable place to be:
“recently we’ve gotten feedback from our community that public content — posts from businesses, brands and media — is crowding out the personal moments that lead us to connect more with each other.”
His users are ‘telling him’ (i.e. they are spending less time on Facebook) what a plethora of recent studies and books have shown: that using Facebook can make you miserable. News content — which is usually ‘bad’ news — doesn’t cheer people up. The angry, aggressive and divisive comment that often accompanies news content doesn’t help with the good vibes. And while the viral spread of so-called ‘fake news’ proves it is popular, it also contributes to the sense that Facebook is a place where you can’t trust the news content. Even when it is credible, it’s often designed to alarm and disturb. Not nice. And Facebook wants nice.
One response to this from journalists is despair and cynicism. The UK media analyst Adam Tinworth sums this approach up in a witty and pithy ‘translation’ of Zuckerberg’s statement:
“We can’t make money unless you keep telling us things about yourself that we can sell to advertisers. Please stop talking about news.”
Another accusation is that Facebook is making these changes because of the increasing costs it is expending at the behest of governments who are now demanding it does more to fight misinformation and offensive content. That might be a side-benefit for Facebook but I don’t think it’s a key factor. It might even be a good thing for credible news if the algorithmic changes include ways of promoting reliable content. But overall the big picture is that journalism is being de-prioritised in favour of fluffier stuff.
Even Jeff Jarvis, the US pioneer of digital journalism who has always sought to work with the grain of the platforms, admits that this is disturbing:
“I’m worried that news and media companies — convinced by Facebook (and in some cases by me) to put their content on Facebook or to pivot to video — will now see their fears about having the rug pulled out from under them realized and they will shrink back from taking journalism to the people where they are having their conversations because there is no money to be made there.”*
The Facebook changes are going to be particularly tough on news organisations that invested heavily in the ‘pivot to video’. These are often the ‘digital native’ news brands who don’t have the spread of outlets for their content that ‘legacy’ news organisations enjoy. The BBC has broadcast. The Financial Times has a newspaper. These organisations have gone ‘digital first’ but like the Economist they have a range of social media strategies. And many of them, like the New York Times, have built a subscription base. Email newsletters provide an increasingly effective by-pass for journalism to avoid the social media honey-trap. It all makes them less dependent on ‘organic’ reach through Facebook.
But Facebook will remain a major destination for news organisations to reach people. News media still needs to be part of that. As the ever-optimistic Jarvis also points out, if these changes mean that Facebook becomes a more civil place where people are more engaged, then journalism designed to fit in with that culture might thrive more:
“journalism and news clearly do have a place on Facebook. Many people learn what’s going on in the world in their conversations there and on the other social platforms. So we need to look how to create conversational news. The platforms need to help us make money that way. It’s good for everybody, especially for citizens.”
News organisations need to do more — not just because of Facebook but also on other platforms. People are increasingly turning to closed networks or channels such as Whatsapp. Again, it’s tough, but journalism needs to find new ways to be on those. I’ve written huge amounts over the last ten years urging news organisations to be more networked and to take advantage of the extraordinary connective, communicative power of platforms such as Facebook. There have been brilliant innovations by newsrooms over that period to go online, to be social and to design content to be discovered and shared through the new networks. But this latest change shows how the media environment continues to change in radical ways, and so the journalism must also be reinvented.
Social media journalist Esra Dogramaci has written an excellent article on some of the detailed tactics that newsrooms can use to connect their content to users in the face of technological developments like Facebook’s algorithmic change:
“if you focus on building a relationship with your audience and developing loyalty, it doesn’t matter what the algorithm does. Your audience will seek you out, and return to you over and over again. That’s how you ‘beat’ Facebook.”
Journalism Must Change
The journalism must itself change. For example, it is clear that emotion is going to be an even bigger driver of attention on Facebook after these changes. The best journalism will continue to be factual and objective at its core — even when it is campaigning or personal. But as I have written before, a new kind of subjectivity can not only reach the hearts and minds of people on places like Facebook, but it can also build trust and understanding.
This latest change by Facebook is dramatic, but it is a response to what people ‘like’. There is a massive appetite for news — and not just because of Trump or Brexit. Demand for debate and information has never been greater or more important in people’s everyday lives. But we have to change the nature of journalism not just the distribution and discovery methods.
The media landscape is shifting to match people’s real media lives in our digital age. Another less noticed announcement from Facebook last week suggested they want to create an ecosystem for local personalised ‘news’. Facebook will use machine learning to surface news publisher content at a local level. It’s not clear how they will vet those publishers but clearly this is another opportunity for newsrooms to engage. Again, dependency on Facebook is problematic, to put it mildly, but ignoring this development is to ignore reality. The old model of a local newspaper for a local area doesn’t effectively match how citizens want their local news anymore.
What Facebook Must Do
Facebook has to pay attention to the needs of journalism, and as it changes its algorithm to reduce the amount of ‘public content’ it has to work harder at prioritising quality news content. As the Guardian’s outstanding digital executive Chris Moran points out, there’s no indication from Facebook that they have factored this into the latest change.
Fighting ‘fake news’ is not just about blocking the bad stuff, it is ultimately best achieved by supporting the good content. How you do that is not a judgement Facebook can be expected or relied upon to do by itself. It needs to be much more transparent and collaborative with the news industry as it rolls out these changes in its products.
When something like Facebook gets this important to society, like any other public utility, it becomes in the public interest to make policy to maximise social benefits. This is why governments around the world are considering and even enacting legislation or regulation regarding the platforms, like Facebook. Much of this is focused on specific issues such as the spread of extremist or false and disruptive information.

Friday, January 12, 2018

How much will MVP app design cost in 2018


An MVP is a great way for your app to find its early adopters, investors and even customers. But experience has shown that a raw MVP without at least a tolerable UI and UX fails miserably. OK, but how much will MVP app design cost? Spoiler: not much. And you will be surprised by the result.
What is the point of an MVP? To show off the core features of your app to a target audience and investors before development even starts. In other words, to test the waters.
However, that doesn’t mean you have to produce an ugly monster with no UI, as one more crucial goal of an MVP is to find your customers. A great UI paired with convenient UX is your key to success.
But what is the cost of MVP app design? How many resources do you have to spare for design purposes? Let’s find out.

Preparations

To get a more or less decent design for your MVP, you can’t just draw some lines and boxes on a napkin and hand it to a design company or freelancers. Actually, you can do that, but it will cost you a lot. We’ll talk about that further on. Now let’s get back to the point.
If you want to save time and, consequently, money, it is a good idea to prepare before meeting with a design agency. A wireframe and some mockups are pretty much everything you might need.
Moreover, by presenting a comprehensive app wireframe and mockups, you can be sure that there won’t be any unpleasant surprises, as hired freelancers or a contracted agency will know exactly what end result they ought to deliver.

Wireframe of the app

This is the skeleton of your app: a rough layout, even one drawn on a napkin (yes, really), of the navigation, screens and elements in your app. It also outlines the app’s core features. And the best part is that you finally have a more or less complete idea of your app.
Sure thing, making a wireframe is more than DIY-appropriate. Tools like Bootstrap may come in handy here. The coolest part is that almost no programming skills are required: only basic knowledge of HTML and CSS, and probably some video guides. :)
With the available templates, you’ll be capable of building a rough layout within hours. Plus, it is completely free, unless you require some advanced templates. But you can always look for those on other platforms.
Needless to say, it will help a lot in the initial pitching session. Even if you decide to entrust this entire job to an agency, some minimal wireframe would be very helpful before approaching them.
On average, a wireframe might take 10–30 hours to develop. It might cost you nothing if you do it yourself. But if you ask an agency, $500–$3,000 would be a fair price, depending on the complexity of the app.

Mockup of the app

A mockup is what your customers and investors will see. It can make them fall in love with your app or drive them away. In a nutshell, it is an approximation of the final look of your app.
There is a good rule of thumb for mockup estimation: a landing page will cost you around $500, and every additional screen usually costs about $50–70. Count the number of screens you are going to have and add everything up to get the total price. That is the most common way companies and freelancers charge for their services.
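As a rough sketch of that rule of thumb, the estimate boils down to a base fee plus a per-screen fee. The function below is my own illustration (its name and the treatment of the landing page as the first screen are assumptions); the dollar figures are the article's ballpark estimates, not fixed market rates:

```python
def mockup_cost(screens, landing_page=500, per_screen=(50, 70)):
    """Estimate a mockup price range: a $500 landing-page base
    plus $50-70 for every additional screen."""
    extra = max(screens - 1, 0)  # the landing page counts as the first screen
    low = landing_page + extra * per_screen[0]
    high = landing_page + extra * per_screen[1]
    return low, high

# A hypothetical 10-screen app: the landing page plus 9 additional screens.
print(mockup_cost(10))  # → (950, 1130)
```

So a modest 10-screen app lands somewhere around $950–$1,130 under this rule, which matches the "not much" spoiler above.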
But what about DIY? Of course, if you are familiar with tools such as Adobe Photoshop and Adobe Experience Design, it won’t be a problem for you to make a simple (or brilliant, depending on your skills) mockup. Those are the most common and handy tools. And while Photoshop will cost you $10–$20 (depending on the plan), Experience Design is completely free.

Interactive mockup

Speaking of simple mockups, there is a great way to improve them: interactivity.
An interactive mockup is a good chance for you to improve client engagement. Customers and investors prefer an interactive solution over a static image. One more big plus: interactive mockups are easy to distribute across various devices.
Tools like Framer and InVision are your best helpers here. They work pretty much like the usual app building platforms: take your mockups, drag and drop different elements, adjust navigation and features, et voilà! Now you have it.
Interactive mockups cost just slightly more than the usual ones. You’ll just need your usual mockups and a subscription to one of those tools. Or you can give this job to the designers you’ve hired. Either way, the additional expenses won’t exceed $100–$500. But the potential profit may be a lot bigger.

Total price

Those blessed ones who choose the DIY way might pay anywhere from nothing at all to a few hundred bucks (subscriptions, paid content, etc.).
And those who decide to hire somebody might receive a bill for $1,000–$10,000. The price varies drastically because of:
  • complexity of your app
  • desired features
  • region where you hire
One more piece of advice: design agencies usually charge a fixed (and pretty high) price, while freelancers and outsourcing companies often charge on a per-hour basis. So hiring a few freelancers in India at $10/hour might be a good idea for your wallet. But is that still true when it comes to quality?
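To make the fixed-versus-hourly trade-off concrete, here is a minimal sketch. All numbers in the example are hypothetical (a $3,000 fixed quote and a 120-hour estimate are my own illustration, not figures from any real quote):

```python
def fixed_vs_hourly(fixed_quote, hourly_rate, estimated_hours):
    """Compare a fixed agency quote against per-hour freelance billing.
    Returns the hourly total and which option is cheaper."""
    hourly_total = hourly_rate * estimated_hours
    cheaper = "hourly" if hourly_total < fixed_quote else "fixed"
    return hourly_total, cheaper

# Hypothetical: a $3,000 fixed agency quote vs. a $10/hour freelancer
# who needs an estimated 120 hours for the same scope.
total, winner = fixed_vs_hourly(3000, 10, 120)
print(total, winner)  # → 1200 hourly
```

Note how sensitive the outcome is to the hour estimate: at $10/hour, the freelance route only stops being cheaper once the work stretches past 300 hours, which is exactly where quality and rework risk come into the picture.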

What voice tech means for brands


An overview of the issues around voice technology and top line considerations for brand owners.
Sony’s LF-S50G speaker with Google Assistant. Image via Sony.

Summary

Voice-based technology is going to have a huge impact on many sectors, with 50% of all search forecast to be voice-based within just two years. The rate of uptake is likely to vary with age, geography and literacy, but some markets and platforms already have high penetration, while globally 10% of search is already voice-based.
There will be new winners and losers in this space, and incumbent brands will need to look at the impact of losing control of the consumer conversation during the purchase process, making it harder to stand out against their competition.
However, voice interfaces give an unprecedented opportunity for brands to interact with consumers in an extremely powerful new way, and few brands have taken advantage of this yet. Current widely-available functionality is limited in scope and very utility-focused; there are opportunities to develop innovative content and experiences as well as whole new services.
The brands that rise to the occasion are in a good position to increase their market share. Additionally, there are many tools available allowing easy experimentation with voice for minimal investment.
Our recommendation is to start a low investment program of service design and Tone of Voice experimentation as soon as possible — possibly tied in to campaign activity — in order to prepare your brand to take advantage of opportunities that this technology reveals.

Introduction

What do we mean by ‘Voice’?

In the context of this article, we mean ‘talking out loud to automated services’. This covers everything from interactive fiction to utilities, available on bespoke hardware devices, within apps on phones and in the cloud, either accessed via a branded product or one of the major players’ virtual assistants.
A lot of the hype around voice revolves around the uptake of smart speakers (75% of US households are projected to own one by 2020), and the ‘voice assistants’ that come with them. Several of these assistants now allow direct third party integration, a bit like apps on a smartphone.
In addition, it’s important to note that these and other voice assistants are available on other hardware — often phones and tablets, via apps and deep OS integrations, but also bespoke hardware devices and even websites.
In many respects the technologies underlying voice and bots are the same — but the ecosystems and impact are different enough to have made voice very much its own area.

Is voice just hype?

No. It’s true that there is a lot of hype about voice, and that it looks similar to 3D printing and other ‘technologies that will change the way we live’, but interacting with computers via voice interfaces is here to stay.
Apart from anything else there are a range of convincing statistics; for example over 20% of mobile search is already voice based and forecast to rise to 50% of all search by 2020.
Perhaps more interestingly, there are some reasons behind those statistics that might be telling.
It’s often said in technology circles that the majority of the next billion people due to get online for the first time will be poorly educated and likely illiterate, as ‘underdeveloped’ nations start to get internet access. For this demographic, video and voice will be paramount, and voice may be the only two-way medium available to them.
Additionally, the iPad effect revealed how even very young children could interact with a touchscreen while struggling with a mouse; voice interaction is even faster and more intuitive (once someone can talk) and will undoubtedly be the primary interaction method for some functions within a few years.
It’s also worth considering the stakes involved, especially for Google and Amazon, the biggest players in ad revenue and organic product discovery respectively. Amazon’s aggressive move into voice will already be having a noticeable effect on Google’s bottom line by moving search away from the web and the reach of Google’s ads, which explains why the latter is working so hard to make a success of its own Assistant.
To its advantage, Google can leverage its existing 2.5Bn Android devices in the wild. With numbers that big and uptake gaining traction, you can understand the predicted total of 7.5Bn installed voice assistants in operation by 2021.
Concerns about privacy and security do slow adoption in some respects, which we explore later in this article.
A common argument against voice is the social oddness or ‘embarrassment factor’ of talking out loud to a device, especially in a public place (and especially by older people — by which we mean anyone over 20 really). BBH’s view on this is that these norms are fast to change; for example a decade ago it was unthinkable to put a phone on a dinner table in most situations; these days it can be a sign of giving undivided attention (depending on nuance), or it can even be acceptable to answer a call or write a text during a meal in some circumstances.

Overview

Voice is quickly carving a space in the overall mix of technological touchpoints for products and services.
In many ways, this is not surprising; using our voices to communicate is three times faster than typing, and significantly easier. It’s so natural that it takes users only 30 minutes to relax into this entirely new interface, despite it bringing a raft of new social norms.
There are also contexts in which voice simply beats non-voice input methods; with wet or full hands (cooking, showering), with eyes engaged elsewhere (driving), or in almost any context for those of us whose use of hands or eyes is limited.
Cooking is an obvious example of when it’s preferable to be hands free. Image via saga.co.uk
While voice is unlikely to completely replace text in the foreseeable future, it will undoubtedly have a big impact in many technology-related fields, notably including e-commerce and search.

A brief history of voice

Automated voice-based interfaces have been around for decades now, although their most influential exposure has been on customer service phone lines. Most of the systems involved have suffered from a variety of problems, from poor voice recognition to complex ecosystems.
Five years ago industry leading voice recognition was only at around 75% accuracy; recent advances in machine learning techniques, systems and hardware have increased the rate of the best systems to around 95–97%.
Approaching and crossing this cognitive threshold has been the single biggest factor in the current boom. Humans recognise spoken words with around 95% accuracy, and use context to error-correct. Any automated system with lower recognition accuracy feels frustrating to most users and therefore isn’t commercially viable.
Related developments in machine learning approaches to intent derivation (explained later in this article) are also a huge contributing factor. Commercial systems for this functionality crossed a similar threshold a couple of years ago and were responsible for the boom in bots; voice is really just bots without text.
Bots themselves have also been around for decades, but the ability to process natural language rather than simply recognising keywords has led to dialogue-based interactions, which in turn powered the recent explosion in platforms and services.

Assistants

Pre-eminent in the current voice technology landscape is the rise of virtual automated assistants. Although Siri (and other, less well-known alternatives) has been available for years, the rise of Alexa and Google Assistant in particular heralds a wider platform approach.
The new assistants promote whole ecosystems and function across a range of devices; Alexa can control your lights, tell you what your meetings are for the day, and help you cook a recipe. These provide opportunities for brands and new entrants alike to participate in the voice experience.

Effect on markets

A new, widely used mechanism for online commerce is always going to be hugely disruptive, and it’s currently too early to know in detail what all the effects of voice will be for brands.
Three of the biggest factors to take into account are, firstly, that many interactions will take place entirely on-platform, reducing or removing the opportunity for search marketing; secondly, that dialogue-based interactions don’t support lists of items well, meaning assistants will generally recommend a single item rather than present options to the user; and lastly, that the entire purchase process will, in many cases, take place with no visual stimulus whatever.
All of these factors are currently receiving a lot of attention but it’s safe to say that the effect on (especially FMCG) brands is going to be enormous, especially when combined with other factors like Amazon’s current online dominance as both marketplace and own-brand provider.
Two strategies currently being discussed as possible ways to approach these new challenges are to market to the platforms (that is, to try to ensure that Amazon, Google etc. recommend your product to users), and/or to drastically increase brand recognition so that users ask for your product by name rather than by product category. Examples would be the way the British use ‘Hoover’ interchangeably with ‘vacuum cleaner’, or Americans use ‘Xerox’ to mean ‘to photocopy’.

Role vs other touchpoints

Over the next few years many brands will create a presence on voice platforms. This could take any form, from services providing utilities or reducing the burden on customer services, to communications and campaign entertainment.
Due to the conversational nature of voice interfaces, the lack of a guaranteed visual aspect and the role of context in sensitive communications, few or no brands will rely on voice alone; it won’t replace social, TV, print and online but rather complement these platforms.
It’s also worth noting that a small but significant part of any brand’s audience won’t be able to speak or hear; for them, voice-only interfaces are not accessible (although platforms such as Google Assistant also have visual interfaces).

Branding and voice

In theory voice technology gives brands an unprecedented opportunity to connect with consumers in a personal, even intimate way; of all the potential brand touchpoints, none have the potential for deep personal connection at scale that voice does.
At the same time, the existing assistant platforms all pose serious questions for brands looking to achieve an emotional connection. Google Assistant provides the richest platform opportunity for brands, but is still at one remove from ‘native’ functionality, while Alexa imposes extra limitations on brands.
Having said that, voice technology does represent an entirely new channel with some compelling brand characteristics, and despite the drawbacks may represent an important opportunity to increase brand recognition.
We’re all hardwired to see faces around us—and to make emotional connections when we talk. Image via adme.ru

Human-like characteristics

It is well established that people assign human characteristics to all their interactions, but this phenomenon is especially powerful with spoken conversations. People develop feelings for voice agents; over a third of regular users wish their assistant were human and 1 in 4 have fantasised about their assistant.
Voice-based services, for the first time, allow brands to entirely construct the characteristics of the entity that represents them. The process is both similar to, and more in-depth than, choosing a brand spokesperson; it’s important to think about all the various aspects of the voice that represents the brand or service.
Examples of factors worth considering when designing a voice interface include the gender, ethnicity and age of the (virtual) speaker, as well as their accent. It may be possible to have multiple different voices, but that raises the question of how to choose which to use — perhaps by service offered or (if known) by customer origin or some other data points.
Another interesting factor is the virtual persona’s relationship to both the user and the brand; is the agent like a host? An advisor? Perhaps a family member? Does it represent the brand itself? Or does it talk about the brand in the third person? Does it say things like “I’ll just check that for you”, implying access to the brand’s core services that’s distinct from the agent itself?
There are of course technical considerations to take into account; depending on the service you create and the platform it lives on it may not be possible to create a bespoke voice at all, or there may be limits on the customisation possible. This is explored in more detail below.
In some cases, it may even be possible to explore richer factors still, such as the timbre of the voice and ‘soft’ aspects like the warmth with which the speech is delivered.
Lastly, it’s worth noting that voice bots have two-way conversations with individual users that are entirely brand-mediated; there is no human in the conversation who may be having a bad day or feeling tired.

Tone of Voice in bot conversations

Tone of Voice documents and editorial guides are generally written to support broadcast media; even as they have become more detailed to inform social media posting, guides often focus on crafted headline messages.
Conversational interfaces push the bounds of those documents further than ever before, for a few reasons.
Firstly, voice agents will typically play a role that is closer to the pure brand world than either sales or support; entertainment and other marketing activities often make an agent’s role closer to that of a social media presence than a real human, but with a human-like conversational touchpoint.
Secondly, both bots and voice agents have two-way conversations with customers. In a sense this is no different from conversations with (human) sales or customer service agents, but psychologically speaking those conversations are with a human first and a brand representative second.
In a conversation with a customer services representative, for example, any perceptions the consumer has about the brand are to some extent separate from the perceptions about the human they are interacting with.
Lastly, it’s critical to note that users will feel empowered to test the boundaries of an automated agent’s conversation more than they would a human, and will naturally test and experiment.
Expect users to ask searching questions about the brand’s competitors or the latest less-than-ideal press coverage. If users are comfortable with the agent, expect them to ask questions unrelated to your service, or even to talk about their feelings and wishes. Even in the normal course of events, voice interactions will yield some unusual and new situations for brands. For example, one commenter on a New York Times article was interrupted mid-sentence, causing a brief stir and a lot of amusement.
How voice agents deal with the wide range of new input comes down not only to the information the agent can respond to, but more importantly the way in which it responds. To some extent this is the realm of UX writing, but hugely important in this is the brand voice.
As an example, if you ask Google Assistant what it thinks of Siri (many users’ first question), it might reply “You know Siri too?! What a small world — hope they’re doing well”.

Service design for voice

Whether based in utility, entertainment, or something else, some core considerations come into play when building a voice-based service. It’s not uncommon for these factors to lead to entirely new services being built for brands.
Obviously it’s important to consider the impact that not having a screen will have on the experience. Lists of results, for example, are notoriously bad over a voice interface; as an experiment, try reading the first page of Google search results out loud. This means that experiences tend to be more “guided”, relying less on the user to select an option, although there are many other implications too.
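As a toy illustration of this “guided” pattern (the wording and function names below are ours, not tied to any platform SDK), a service can offer options one at a time instead of reading out a whole list:

```python
# Illustrative sketch: instead of reading an entire results list aloud,
# a guided voice dialogue presents one option per conversational turn.

def guided_prompts(results, limit=3):
    """Yield one spoken prompt per result, ending with a fallback prompt."""
    for item in results[:limit]:
        yield f"How about {item}? Say 'yes' to choose it, or 'next' to hear another."
    yield "Those were my best matches. Want me to start over?"

prompts = list(guided_prompts(["a margherita pizza", "a pepperoni pizza", "a calzone"]))
```

Each prompt gives the user a single decision to make, which keeps the cognitive load of a screenless interaction manageable.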
With that in mind, it’s also good to note that increasingly voice platform users may have screens that both they and the assistant can access; either built into the device (like with Echo Show) or via smartphone or ecosystem-wide screens such as with the Google Assistant. While these screens can’t be counted upon, they can be used to enrich experiences where available.
Another important factor is the conversational nature of the interface; this has a huge impact on the detail of the service design but can also mean selecting services with a high ratio of content to choices, or at least where a linear journey through the decision matrix would make sense. Interfaces of this sort are often hugely advantageous for complex processes where screen-based interfaces tend to get cluttered and confusing.
Finally, as with social, context is massively important to the way users access a voice service. If they are using a phone they may be in public or at home, they may be rushed or relaxed, and all these affect the service. If the user is accessing the service via a smart speaker they are likely at home but there may be other people present; again affecting the detail of the service.
In general, services well suited to voice will often be limited in scope and be able to reward users with very little interaction; more complex existing services will often need AI tools to further simplify their access before being suitable to voice.

The voice landscape

In the last two to three years the landscape of voice technology has shifted dramatically as underlying technologies have reached important thresholds. From Google and Amazon to IBM and Samsung, many large technology companies seem to have an offering in the voice area, but the services each offers differ wildly.

Devices and Contexts

It’s important to note that many devices have capabilities beyond voice alone. Smart speakers are generally voice-only, but most also have lights that indicate to users when they are listening and responding, and so help to direct the conversation.
Newer Alexa devices like the Echo Show and Echo Spot are now shipping with screens and cameras built in, while Google Assistant is most commonly used on smartphones where a screen mirrors the conversation using text, by default. On smartphones and some other devices users have the option to have the entire dialogue via text instead of voice, which can make a difference to the type of input they receive as well as the nuances available in the output.
Screen based conversational interfaces are developing rapidly to also include interactive modules such as lists, slideshows, buttons and payment interfaces. Soon voice controlled assistants will also be able to use nearby connected TVs to supplement conversational interfaces, although what’s appropriate to show here will differ from smartphone interfaces.
As should be clear, as well as a wide range of available capabilities, the other major factor affecting voice interactions is context; users may be on an individual personal device or in a shared communal space like a kitchen or office; this affects how they will be comfortable interacting.

Platforms and ecosystems

Amazon Echo speakers feature Alexa

Amazon Alexa

Perhaps the most prominent UK/US-based voice service is Amazon’s Alexa: initially accessible via Echo devices, but increasingly available in hardware both from Amazon and third parties.
Amazon has a considerable first mover advantage in the market (72% smart speaker market share), and it’s arguably the commercial success of the range of Echo devices that has kick-started the recent surge in offerings from other companies.
Alexa is a consumer-facing platform that allows brands to create ‘skills’ that consumers can install. End users configure Alexa via a companion app which, among other things, lets them install third party skills from an app store. An installed skill lets the end user ask Alexa extra questions that expose the skill’s service offering; e.g. “Alexa, what’s my bank balance?”
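As a rough sketch of what sits behind such a question: a skill’s backend (commonly a cloud function) receives a JSON request and returns a spoken response. The handler below is illustrative only; `lookup_balance` is a hypothetical stand-in for a real banking integration, and the response dictionary broadly follows the plain-text response shape Alexa skills use.

```python
# Hypothetical sketch of a 'bank balance' skill handler. lookup_balance is
# an invented placeholder, not a real API.

def lookup_balance(user_id):
    """Stand-in for a call to a real banking backend."""
    return "1,234 pounds"  # placeholder value for illustration

def handle_request(event):
    """Answer 'Alexa, what's my bank balance?' with a plain-text response."""
    user_id = event["session"]["user"]["userId"]
    speech = f"Your current balance is {lookup_balance(user_id)}."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The platform turns the returned text into speech, so the skill developer never handles audio directly.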
There are now approximately 20,000 Alexa skills across all markets, up from 6,000 at the end of 2016. Although many have extremely low usage rates at present, Amazon has recently introduced funding models to continue to motivate third party developers to join its ecosystem.
With an estimated 32M Alexa-powered devices sold by the end of 2017 (of which around 20M in Q4) there’s no doubt that the platform has a lot of reach, but Alexa’s skills model and Amazon’s overall marketplace strategy combine to place brands very much in Amazon’s control.
Google Home features the Assistant. Image via google.com

Google Assistant

Google launched the Home device, powered by the Google Assistant, in May 2016, over a year after Amazon launched the Echo. Google has been aggressively marketing the Assistant (and Home hardware devices) both to consumers and to partners and brands. Google already commands a 15% share of the smart speaker market, double that of the previous year; its share of smartphone voice assistants is 46%, projected to rise to 60% by 2022.
Google’s Assistant is also being updated with new features at an incredible rate, and arguably has now taken the lead in terms of functionality provided to users and third party developers.
Perhaps most notably, Assistant takes a different approach to brand integration compared to other offerings, with the Actions on Google platform. Using this platform, brands are able to develop not only the service offering but the entire conversational interface, including the voice output of their service.
Users don’t need to install third party apps but can simply ask to speak to them; much the way someone might ask a switchboard or receptionist to speak to a particular person. Once speaking to a particular app, users can authenticate, allow notifications, switch devices and pay, all through the Assistant’s conversation based voice interface.
By integrating Assistant tightly with Android, the potential reach of the platform is enormous; there are currently 2.5Bn Android devices in operation. The software is also available to third party hardware manufacturers, further increasing the potential of the ecosystem.
Cortana doesn’t have a dedicated device but is available on Windows and Xbox devices. Image via Wallpaperden

Microsoft Cortana

Microsoft’s Cortana is installed on every Windows 10 device and has an impressive 145M monthly active users (probably mostly via Xbox), but is currently less heavily promoted and updated than the offerings from Google and Amazon.
Cortana provides a similar ‘skill’ interface to Alexa, but has started developing this relatively late and is playing catch-up both in terms of core functionality and the number of available integrations.
Microsoft’s huge overall user base and its dominance in both work-related software and gaming ecosystems do give Cortana a powerful (and growing) presence in the market, despite its share of dedicated smart speaker devices being small.
Baidu’s Raven speakers are the company’s first foray into dedicated hardware for its well-known voice services. Image via Slate

Baidu

Baidu (often called the ‘Chinese Google’) arguably started the recent trend for voice interfaces with a combination of groundbreaking technology and a huge installed user base with various cultural and socioeconomic predispositions to favouring voice over text.
Baidu recently released DuerOS, a platform for third party hardware developers to build their own voice powered devices, and via the ‘Baidu Brain’ offers a suite of AI platforms for various purposes (many involving voice).
Most consumers currently interact with Baidu’s voice technologies via their Chinese language dedicated services (i.e. without any third party integrations).

Siri, Bixby and Watson

Apple’s Siri and Samsung’s Bixby are both voice assistants that currently work only on a given device or within the manufacturer’s ecosystem; neither could be called a platform, as they don’t offer third parties access to create services.
Both have reasonable market share due to the number of phones they appear on, but their gated offerings and lower accuracy voice recognition now make them seem limited by comparison with other assistants.
IBM’s Watson is perhaps most usefully seen as a suite of tools that brands can use to create bespoke services.

Content and services

There are a lot of considerations when designing services for voice based conversational interfaces; these are touched on above but affect the range of functionality that is available.

— Utility

The vast majority of voice services currently available are utilities, giving access to a simple piece of functionality already available via other methods. These range from the more mundane (playing a specific radio station or listening to news) to the more futuristic (adjusting the lights or playing a specific film on the TV), via provider-specific functions like ordering a pizza or a taxi.
Lots of brands are beginning to offer services in this area, from home automation and similar niche providers like WeMo, Plex and Philips Hue, to more widely used services like Uber and Dominos, but interestingly also including big brands offering innovative services. Both Mercedes and Hyundai, for example, allow users to start their cars and prewarm them from various voice assistant platforms.

— Entertainment

Various games, jokes and sound libraries are available on all the major platforms from a variety of providers, often either the platform provider themselves (i.e. Google or Amazon) or small companies or individual developers.
A few brands are starting to experiment more with the possibilities of the platform, however; for example, Netflix and Google released a companion experience for Season 2 of Stranger Things, and the BBC recently created a piece of interactive fiction for Alexa.
The potential for entertainment pieces in this area is largely untapped and only just beginning to be explored.

Tools

Many sets of tools exist for building voice services, as well as related (usually AI based) functionality. By and large the cloud based services on offer are free or cheap, and easy to use. Serious projects may require bespoke solutions developed in house but that is unnecessary for the majority of requirements.
A full rundown of all the tools available is outside the scope of this article, but notable sets are IBM’s Watson Services, Google’s Speech API and DialogFlow, and Microsoft’s Cognitive Services.
All this means that prototyping and experimentation can be done quickly and cheaply, and that production-ready applications can be costed on a usage model, which is very cost-effective at small scale.

— Speech Generation

Of particular note to brands are the options around speech generation, as these are literally the part of the brand that end users interact with.
If the service being offered has a static, finite set of possible responses to all user input, it is possible to use recorded speech. This approach can be extended in some cases with a record-and-stitch-together approach such as that used by TfL.
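A minimal sketch of the record-and-stitch idea (the clip file names here are invented for illustration): responses are assembled in order from a finite library of pre-recorded audio clips, with only the variable parts swapped in.

```python
# Sketch of record-and-stitch speech: a fixed announcement is assembled
# from pre-recorded clips, represented here simply by file names.

CLIPS = {
    "next_train": "next_train.wav",   # "The next train arrives in"
    "minutes": "minutes.wav",         # "minutes"
}

def stitch_announcement(minutes):
    """Return the ordered clip sequence for 'The next train arrives in N minutes'."""
    return [CLIPS["next_train"], f"number_{minutes}.wav", CLIPS["minutes"]]
```

An audio pipeline would then concatenate these files for playback; the approach only works because the set of possible sentences is small and known in advance.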
For services with a wide range of outputs, generated voices are the only practical way to go, but even here there are multiple options. There are several free, more-or-less “computer”-sounding voices easily available, but we would recommend exploring approaches that use voice actors to create a bespoke, satnav-like TTS voice.
The rapidly advancing field of machine-learning-generated speech, which can sound very real and even like specific people, is worth keeping an eye on; it is not yet generally available, but Google is already using WaveNet for Assistant in the US, and Adobe has been working on a similar project.

The technology behind voice

What people refer to as voice is really a set of different technologies all working together.
Notably, Speech To Text is the ‘voice recognition’ component that processes some audio and outputs written text. This field has improved in leaps and bounds in recent years, to the point where some systems are now better at this than humans, across a range of conditions.
In June, Google’s system was reported to have 95% accuracy (the same as humans, and an improvement of 20% over 4 years), while Baidu is usually rated as having the most accurate system of all with over 97%.
The core of each specific service lies in Intent Derivation, the set of technologies for working out the underlying user intent implied by a piece of text; this matches user requests with the responses the service is able to provide.
The recent rise in the number (and hype) of bots and bot platforms is related to this technology, and as almost all voice systems are really just bots with voice recognition added in, this technology is crucial. There are many platforms that provide this functionality (notably IBM Watson, and the free DialogFlow, among many others).
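To make the concept concrete, here is a deliberately naive intent matcher based on word overlap with training phrases. This is our own toy illustration; real platforms such as DialogFlow or Watson use trained language models rather than anything this simple.

```python
# Toy illustration of intent derivation: match a user utterance to the
# intent whose training phrases share the most words with it.

INTENTS = {
    "check_balance": ["what is my balance", "how much money do i have"],
    "order_pizza": ["order a pizza", "i want pizza delivered"],
}

def derive_intent(utterance):
    """Return the best-matching intent name, or None if nothing overlaps."""
    words = set(utterance.lower().split())
    def score(phrases):
        return max(len(words & set(p.split())) for p in phrases)
    best = max(INTENTS, key=lambda name: score(INTENTS[name]))
    return best if score(INTENTS[best]) > 0 else None

derive_intent("how much money is in my account")  # matches "check_balance"
```

Note how the matched intent is derived from meaning-bearing word overlap rather than an exact keyword, which is the step that separates dialogue systems from simple keyword triggers.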
The other important set of voice-related technologies revolve around Speech Generation. There are many ways to achieve this and the options are very closely related to the functionality of the specific voice service.
The tools and options relating to this are explored earlier in this article, but they range widely in cost and quality, based on the scope of the service and the type of output that can be given to users.

Considerations

Creating a voice-first service involves additional considerations as compared to other digital services.
First and foremost, user privacy is getting increased attention as audio recordings of users are sent to the platform and/or brand and often stored there. Depending on the manner in which the service is available to users this may be an issue just for the platform involved, or may be something the brand needs to address directly.
Recently the C4 show ‘Celebrity Hunted’ caused a bit of a backlash against Alexa, as users saw first-hand the power of stored recordings remaining available in the future. There are also worries about the ‘always on’ potential of the recording, despite major platforms repeatedly trying to assure users that only phrases starting with the keyword get recorded and sent to the cloud.
As with most things however, a reasonable value exchange is the safest way to proceed. Essentially, ensure that the offering is useful or entertaining.
A phone on a dinner table is a lot more socially acceptable than it was a few years ago. Talking out loud to devices will go the same way. Image via musely.com
Another consideration, as touched upon earlier in this article, is that the right service for a voice-first interface may not be something your brand already offers — or at the least that the service may need adaptation to be completely right for the format. We’ve found during workshops that the most interesting use cases for branded voice services often require branching out into whole new areas.
Perhaps most interestingly, this area allows for a whole new interesting set of data to be collected about users of the service — actual audio recordings aside, novel services being used in new contexts (at home without a device in hand, multiuser, etc) should lead to interesting new insights.

Recommendations for brands

We believe that long term, many brands will benefit from having some or all of their core digital services available over voice interfaces, and that the recent proliferation of the technology has created opportunities in the short and medium terms as well.
A good starting point is to start to include voice platforms in the mix for any long term planning involving digital services.
Ideally brands should start to work on an overall voice (or agent, including bots) strategy for the long term. This would encompass which services might best be offered in these different media, and how they may interact with customer services, CRM, social and advertising functions as well as a roadmap to measure progress against.
In the short term, we believe brands ought to experiment using off-the-shelf tools to rapidly prototype and even to create short-lived productions, perhaps related to campaigns.
The key area to focus on for these experiments should be how the overall brand style, tone of voice, and customer service scripts convert into a voice persona, and how users respond to variations in this persona.
This experimentation can be combined with lightweight voice-first service design in service of campaigns, but used to build an overall set of guides and learnings that can be used for future core brand services.
