
Friday, January 12, 2018

What voice tech means for brands


An overview of the issues around voice technology and top line considerations for brand owners.
Sony’s LF-S50G speaker with Google Assistant. Image via Sony.

Summary

Voice-based technology is going to have a huge impact on many sectors, with 50% of all search forecast to be voice-based within just two years. The rate of uptake is likely to vary by age, geography and literacy, but some markets and platforms already have high penetration, and globally 10% of search is already voice-based.
There will be new winners and losers in this space, and incumbent brands will need to reckon with the loss of control over the consumer conversation during the purchase process, which will make it harder to stand out against the competition.
However, voice interfaces give an unprecedented opportunity for brands to interact with consumers in an extremely powerful new way, and few brands have taken advantage of this yet. Current widely-available functionality is limited in scope and very utility-focused; there are opportunities to develop innovative content and experiences as well as whole new services.
The brands that rise to the occasion are in a good position to increase their market share. Additionally, there are many tools available allowing easy experimentation with voice for minimal investment.
Our recommendation is to start a low investment program of service design and Tone of Voice experimentation as soon as possible — possibly tied in to campaign activity — in order to prepare your brand to take advantage of opportunities that this technology reveals.

Introduction

What do we mean by ‘Voice’?

In the context of this article, we mean ‘talking out loud to automated services’. This covers everything from interactive fiction to utilities, available on bespoke hardware devices, within apps on phones and in the cloud, either accessed via a branded product or one of the major players’ virtual assistants.
Much of the hype around voice centres on the uptake of smart speakers (75% of US households are projected to own one by 2020) and the 'voice assistants' that come with them. Several of these assistants now allow direct third-party integration, a little like apps on a smartphone.
In addition, it’s important to note that these and other voice assistants are available on other hardware — often phones and tablets, via apps and deep OS integrations, but also bespoke hardware devices and even websites.
In many respects the technologies underlying voice and bots are the same — but the ecosystems and impact are different enough to have made voice very much its own area.

Is voice just hype?

No. It’s true that there is a lot of hype about voice, and that it looks similar to 3D printing and other ‘technologies that will change the way we live’, but interacting with computers via voice interfaces is here to stay.
Apart from anything else, there is a range of convincing statistics; for example, over 20% of mobile search is already voice-based and is forecast to rise to 50% of all search by 2020.
Perhaps more interestingly, there are some reasons behind those statistics that might be telling.
It's often said in technology circles that the majority of the next billion people due to come online for the first time will be poorly educated and likely illiterate, as 'underdeveloped' nations gain internet access. For this demographic, video and voice will be paramount, and voice may be the only two-way medium available to them.
Additionally, the iPad effect revealed how even very young children could interact with a touchscreen while struggling with a mouse; voice interaction is even faster and more intuitive (once someone can talk) and will undoubtedly be the primary interaction method for some functions within a few years.
It's also worth considering the stakes involved, especially for Google and Amazon, the biggest players in ad revenue and organic product discovery respectively. Amazon's aggressive move into voice is already having a noticeable effect on Google's bottom line by moving search away from the web and the reach of Google's ads, which explains why the latter is working so hard to make a success of its own Assistant.
To their advantage, Google can leverage their existing 2.5Bn Android devices in the wild. With numbers that big, and uptake gaining traction, the predicted total of 7.5Bn installed voice assistants in operation by 2021 becomes easy to believe.
Concerns about privacy and security do slow adoption in some respects, which we explore later in this article.
A common argument against voice is the social oddness or 'embarrassment factor' of talking out loud to a device, especially in a public place (and especially for older people, by which we mean anyone over 20, really). BBH's view is that these norms change fast: a decade ago it was unthinkable to put a phone on a dinner table in most situations, whereas today it can be a sign of giving undivided attention (depending on nuance), and in some circumstances it is even acceptable to answer a call or write a text during a meal.

Overview

Voice is quickly carving a space in the overall mix of technological touchpoints for products and services.
In many ways this is not surprising; speaking is around three times faster than typing, and significantly easier. It's so natural that it takes users only around 30 minutes to relax into this entirely new interface, despite the new social norms it brings with it.
There are also contexts in which voice simply beats non-voice input methods; with wet or full hands (cooking, showering), with eyes being used for something else (driving) or almost anything for those of us whose use of hands or eyes may be limited.
Cooking is an obvious example of when it’s preferable to be hands free. Image via saga.co.uk
While voice is unlikely to completely replace text in the foreseeable future, it will undoubtedly have a big impact in many technology-related fields, notably including e-commerce and search.

A brief history of voice

Automated voice-based interfaces have been around for decades, although their most influential exposure has been on customer service phone lines. Most of these systems have suffered from a variety of problems, from poor voice recognition to complex ecosystems.
Five years ago, industry-leading voice recognition was only around 75% accurate; recent advances in machine learning techniques, systems and hardware have raised the accuracy of the best systems to around 95–97%.
Approaching and crossing this cognitive threshold has been the single biggest factor in the current boom. Humans recognise spoken words with around 95% accuracy, and use context to error correct. Any automated system with a lower recognition accuracy feels frustrating to most users and isn’t therefore commercially viable.
Related developments in machine learning approaches to intent derivation (explained later in this article) are also a huge contributing factor. Commercial systems for this functionality crossed a similar threshold a couple of years ago and were responsible for the boom in bots; voice is really just bots with speech in place of text.
Bots themselves have also been around for decades, but the ability to process natural language rather than simply recognising keywords has led to dialogue-based interactions, which in turn powered the recent explosion in platforms and services.

Assistants

Pre-eminent in the current voice technology landscape is the rise of virtual automated assistants. Although Siri (and other less well known alternatives) have been available for years, the rise of Alexa and Google Assistant in particular heralds a wider platform approach.
The new assistants promote whole ecosystems and function across a range of devices; Alexa can control your lights, tell you what your meetings are for the day, and help you cook a recipe. These provide opportunities for brands and new entrants alike to participate in the voice experience.

Effect on markets

A new, widely used mechanism for online commerce is always going to be hugely disruptive, and it’s currently too early to know in detail what all the effects of voice will be for brands.
Three of the biggest factors to take into account: firstly, many interactions will take place entirely on-platform, reducing or removing the opportunity for search marketing. Secondly, dialogue-based interactions don't handle lists of items well, so assistants will generally recommend a single item rather than present options to the user. Lastly, in many cases the entire purchase process will take place with no visual stimulus whatsoever.
All of these factors are currently receiving a lot of attention but it’s safe to say that the effect on (especially FMCG) brands is going to be enormous, especially when combined with other factors like Amazon’s current online dominance as both marketplace and own-brand provider.
Two strategies currently being discussed as ways to approach these new challenges are to market to the platforms (that is, try to ensure that Amazon, Google and the rest recommend your product to users), and/or to drastically increase brand recognition so that users ask for your product by name rather than by product category. Examples are the way the British use 'Hoover' interchangeably with 'vacuum cleaner', or Americans use 'Xerox' to mean 'to photocopy'.

Role vs other touchpoints

Over the next few years many brands will create a presence on voice platforms. This could take any form, from services providing utilities or reducing the burden on customer services, to communications and campaign entertainment.
Due to the conversational nature of voice interfaces, the lack of a guaranteed visual aspect and the role of context in sensitive communications, few or no brands will rely on voice alone; it won’t replace social, TV, print and online but rather complement these platforms.
It's also worth noting that a small but significant part of any brand's audience won't be able to speak or hear; for them, voice-only interfaces are not accessible (although platforms such as Google Assistant also offer visual interfaces).

Branding and voice

In theory voice technology gives brands an unprecedented opportunity to connect with consumers in a personal, even intimate way; of all the potential brand touchpoints, none have the potential for deep personal connection at scale that voice does.
At the same time, the existing assistant platforms all pose serious questions for brands looking to achieve an emotional connection. Google Assistant provides the richest platform opportunity for brands, but is still at one remove from 'native' functionality, while Alexa imposes extra limitations on brands.
Having said that, voice technology does represent an entirely new channel with some compelling brand characteristics, and despite the drawbacks may represent an important opportunity to increase brand recognition.
We’re all hardwired to see faces around us—and to make emotional connections when we talk. Image via adme.ru

Human-like characteristics

It is well established that people assign human characteristics to the systems they interact with, and this phenomenon is especially powerful in spoken conversation. People develop feelings for voice agents; over a third of regular users wish their assistant were human, and 1 in 4 have fantasised about their assistant.
Voice-based services, for the first time, allow brands to entirely construct the characteristics of the entity that represents them. The process is both similar to and more in depth than choosing a brand spokesperson; it’s important to think about all the various aspects of the voice that represents the brand or service.
Examples of factors worth considering when designing a voice interface include the gender, ethnicity and age of the (virtual) speaker, as well as their accent. It may be possible to have multiple different voices, but that raises the question of how to choose which to use — perhaps by service offered or (if known) by customer origin or some other data points.
Another interesting factor is the virtual persona’s relationship to both the user and the brand; is the agent like a host? An advisor? Perhaps a family member? Does it represent the brand itself? Or does it talk about the brand in the third person? Does it say things like “I’ll just check that for you”, implying access to the brand’s core services that’s distinct from the agent itself?
There are of course technical considerations to take into account; depending on the service you create and the platform it lives on it may not be possible to create a bespoke voice at all, or there may be limits on the customisation possible. This is explored in more detail below.
In some cases, it may even be possible to explore factors that are richer still; such as the timbre of the voice and ‘soft’ aspects like the warmth that the speech is delivered with.
Lastly, it’s worth noting that voice bots have two way conversations with individual users that are entirely brand mediated; there is no human in the conversation who may be having a bad day or be feeling tired.

Tone of Voice in bot conversations

Tone of Voice documents and editorial guides are generally written to support broadcast media; even as they have become more detailed to inform social media posting, guides often focus on crafted headline messages.
Conversational interfaces push the bounds of those documents further than ever before, for a few reasons.
Firstly, voice agents will typically play a role closer to the pure brand world than either sales or support; entertainment and other marketing activities often make an agent's role closer to that of a social media presence than of a real human, but with a human-like conversational touchpoint.
Secondly, both bots and voice agents have two way conversations with customers. In a sense this is no different than sales or customer service (human) agents, but psychologically speaking those conversations are with a human first and a brand representative second.
In a conversation with a customer services representative, for example, any perceptions the consumer has about the brand are to some extent separate from the perceptions about the human they are interacting with.
Lastly, it’s critical to note that users will feel empowered to test the boundaries of an automated agent’s conversation more than they would a human, and will naturally test and experiment.
Expect users to ask searching questions about the brand's competitors or the latest less-than-ideal press coverage. If users are comfortable with the agent, expect them to ask questions unrelated to your service, or even to talk about their feelings and wishes. Even in the normal course of events, voice interactions will yield some unusual and new situations for brands; for example, a commenter on a New York Times article was interrupted mid-sentence, causing a brief stir and a lot of amusement.
How voice agents deal with the wide range of new input comes down not only to the information the agent can respond to, but more importantly the way in which it responds. To some extent this is the realm of UX writing, but hugely important in this is the brand voice.
As an example, if you ask Google Assistant what it thinks of Siri (many users' first question), it might reply "You know Siri too?! What a small world — hope they're doing well".

Service design for voice

Whether based in utility, entertainment, or something else, some core considerations come into play when building a voice-based service. It’s not uncommon for these factors to lead to entirely new services being built for brands.
Obviously it's important to consider the impact that the lack of a screen will have on the experience. Lists of results, for example, are notoriously bad over a voice interface; as an experiment, try reading the first page of a Google search out loud. This means that experiences tend to be more "guided" and rely less on the user selecting from options, although there are many other implications too.
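To make the contrast concrete, here's a minimal sketch (in Python, with hypothetical data; this is not any platform's API) of the difference between dumping a list on the user and guiding them through it one item at a time:

```python
# A hypothetical result set that a screen would simply show as a list.
results = ["Colombian dark roast", "Ethiopian single origin", "House decaf blend"]

def screen_style(items):
    """Screen UI: show everything at once and let the user pick."""
    return "Results: " + "; ".join(items)

def voice_style(items, index=0):
    """Voice UI: offer one option per conversational turn instead."""
    if index >= len(items):
        return "That's everything I found. Want me to start over?"
    return f"How about {items[index]}? Say 'next' to hear another option."

print(screen_style(results))
print(voice_style(results))           # first suggestion
print(voice_style(results, index=3))  # options exhausted
```

The voice version trades completeness per turn for a dialogue the user can steer, which is why voice services tend to work best with a narrow, guided decision path.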
With that in mind, it’s also good to note that increasingly voice platform users may have screens that both they and the assistant can access; either built into the device (like with Echo Show) or via smartphone or ecosystem-wide screens such as with the Google Assistant. While these screens can’t be counted upon, they can be used to enrich experiences where available.
Another important factor is the conversational nature of the interface; this has a huge impact on the detail of the service design but can also mean selecting services with a high ratio of content to choices, or at least where a linear journey through the decision matrix would make sense. Interfaces of this sort are often hugely advantageous for complex processes where screen-based interfaces tend to get cluttered and confusing.
Finally, as with social, context is massively important to the way users access a voice service. If they are using a phone they may be in public or at home, they may be rushed or relaxed, and all these affect the service. If the user is accessing the service via a smart speaker they are likely at home but there may be other people present; again affecting the detail of the service.
In general, services well suited to voice will often be limited in scope and be able to reward users with very little interaction; more complex existing services will often need AI tools to further simplify their access before being suitable to voice.

The voice landscape

In the last two to three years the landscape of voice technology has shifted dramatically as underlying technologies have reached important thresholds. From Google and Amazon to IBM and Samsung, many large technology companies seem to have an offering in the voice area, but the services each offers differ wildly.

Devices and Contexts

It's important to note that many devices have capabilities beyond voice alone. Smart speakers are generally voice-only, but most also have lights that indicate when they are listening and responding, helping to direct the conversation.
Newer Alexa devices like the Echo Show and Echo Spot are now shipping with screens and cameras built in, while Google Assistant is most commonly used on smartphones where a screen mirrors the conversation using text, by default. On smartphones and some other devices users have the option to have the entire dialogue via text instead of voice, which can make a difference to the type of input they receive as well as the nuances available in the output.
Screen based conversational interfaces are developing rapidly to also include interactive modules such as lists, slideshows, buttons and payment interfaces. Soon voice controlled assistants will also be able to use nearby connected TVs to supplement conversational interfaces, although what’s appropriate to show here will differ from smartphone interfaces.
As should be clear, as well as a wide range of available capabilities, the other major factor affecting voice interactions is context; users may be on an individual personal device or in a shared communal space like a kitchen or office; this affects how they will be comfortable interacting.

Platforms and ecosystems

Amazon Echo speakers feature Alexa

Amazon Alexa

Perhaps the most prominent voice service in the UK and US is Amazon's Alexa: initially accessible only via Echo devices, but increasingly available in hardware from both Amazon and third parties.
Amazon has a considerable first mover advantage in the market (72% smart speaker market share), and it’s arguably the commercial success of the range of Echo devices that has kick-started the recent surge in offerings from other companies.
Alexa is a consumer-facing platform that allows brands to create 'skills'. End users configure Alexa via a companion app, which among other things lets them install third-party skills from an app store. An installed skill lets the end user ask Alexa specific extra questions that expose the skill's service offering, e.g. "Alexa, what's my bank balance?"
There are now approximately 20,000 Alexa skills across all markets, up from 6,000 at the end of 2016. Although many have extremely low usage rates at present, Amazon has recently introduced funding models to continue to motivate third party developers to join its ecosystem.
With an estimated 32M Alexa-powered devices sold by the end of 2017 (of which around 20M in Q4) there’s no doubt that the platform has a lot of reach, but Alexa’s skills model and Amazon’s overall marketplace strategy combine to place brands very much in Amazon’s control.
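To give a flavour of what's involved technically, a skill's backend is essentially a web service that returns structured JSON telling Alexa what to say. The sketch below builds a response in the shape the Alexa Skills Kit expects; the bank-balance wording is purely illustrative.

```python
def build_alexa_response(text, end_session=True):
    """Build the minimal JSON body a skill's backend returns to Alexa.

    The structure follows the Alexa Skills Kit response format:
    a version marker plus a response object carrying the speech output.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

# e.g. answering "Alexa, what's my bank balance?"
reply = build_alexa_response("Your balance is two hundred pounds.")
print(reply["response"]["outputSpeech"]["text"])
```

Real skills also receive a structured request describing which intent the user triggered, but the response above is the core of the round trip.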
Google Home features the Assistant. Image via google.com

Google Assistant

Google launched the Home device, powered by the Google Assistant, in May 2016, over a year after Amazon launched the Echo. Google has been aggressively marketing the Assistant (and Home hardware devices) both to consumers and to partners and brands. Google already commands a 15% market share of smart speakers, double that of the previous year; its share of smartphone voice assistants is 46%, projected to rise to 60% by 2022.
Google’s Assistant is also being updated with new features at an incredible rate, and arguably has now taken the lead in terms of functionality provided to users and third party developers.
Perhaps most interestingly, Assistant takes an interesting and different approach to brand integration compared to other offerings, with the Actions on Google platform. Using this platform, brands are able to develop not only the service offering but the entire conversational interface, including the voice output of their service.
Users don’t need to install third party apps but can simply ask to speak to them; much the way someone might ask a switchboard or receptionist to speak to a particular person. Once speaking to a particular app, users can authenticate, allow notifications, switch devices and pay, all through the Assistant’s conversation based voice interface.
Because Assistant is integrated tightly with Android, the potential reach of the platform is enormous; there are currently 2.5Bn Android devices in operation. The software is also available to third-party hardware manufacturers, further increasing the potential of the ecosystem.
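The general shape of such an integration can be sketched as follows (all names here are hypothetical, not the Actions on Google API itself): the platform handles speech recognition and intent matching, then calls the brand's webhook with a structured request, and the webhook returns the text the assistant should speak.

```python
# Hypothetical handlers for a brand's conversational service.
def handle_balance(params):
    return "Your balance is two hundred pounds."

def handle_opening_hours(params):
    return f"Our {params.get('branch', 'main')} branch is open 9 to 5."

INTENT_HANDLERS = {
    "check_balance": handle_balance,
    "opening_hours": handle_opening_hours,
}

def webhook(request):
    """Map the platform's parsed intent to the speech the assistant returns."""
    handler = INTENT_HANDLERS.get(request["intent"])
    if handler is None:
        return {"speech": "Sorry, I can't help with that yet."}
    return {"speech": handler(request.get("parameters", {}))}

print(webhook({"intent": "opening_hours", "parameters": {"branch": "Soho"}})["speech"])
```

Note that the brand's code never touches raw audio; the division of labour between platform and webhook is what makes these integrations cheap to build.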
Cortana doesn’t have a dedicated device but is available on Windows and Xbox devices. Image via Wallpaperden

Microsoft Cortana

Microsoft's Cortana is installed on every Windows 10 device and has an impressive 145M monthly active users (probably mostly via Xbox), but is currently less heavily promoted and updated than the offerings from Google and Amazon.
Cortana provides a similar ‘skill’ interface to Alexa, but has started developing this relatively late and is playing catch-up both in terms of core functionality and the number of available integrations.
Microsoft’s huge overall user base and its dominance in both work-related software and gaming ecosystems do give Cortana a powerful (and growing) presence in the market, despite its share of dedicated smart speaker devices being small.
Baidu’s Raven speakers are the company’s first foray into dedicated hardware for its well-known voice services. Image via Slate

Baidu

Baidu (often called the ‘Chinese Google’) arguably started the recent trend for voice interfaces with a combination of groundbreaking technology and a huge installed user base with various cultural and socioeconomic predispositions to favouring voice over text.
Baidu recently released DuerOS, a platform for third party hardware developers to build their own voice powered devices, and via the ‘Baidu Brain’ offers a suite of AI platforms for various purposes (many involving voice).
Most consumers currently interact with Baidu’s voice technologies via their Chinese language dedicated services (i.e. without any third party integrations).

Siri, Bixby and Watson

Apple’s Siri and Samsung’s Bixby are both voice assistants that currently only work on a given device or perhaps in the manufacturer’s ecosystem; neither could be called a platform as they don’t offer third parties access to create services.
Both have reasonable market share due to the number of phones they appear on, but their gated offerings and lower accuracy voice recognition now make them seem limited by comparison with other assistants.
IBM’s Watson is perhaps most usefully seen as a suite of tools that brands can use to create bespoke services.

Content and services

There are a lot of considerations when designing services for voice based conversational interfaces; these are touched on above but affect the range of functionality that is available.

— Utility

The vast majority of voice services currently available are utilities, giving access to a simple piece of functionality already available via other methods. These range from the more mundane (playing a specific radio station or listening to news) to the more futuristic (adjusting the lights or playing a specific film on the TV), via provider-specific functions like ordering a pizza or a taxi.
Lots of brands are beginning to offer services in this area, from home-automation and similar niche players (WeMo, Plex, Philips Hue) to widely used services like Uber and Domino's, and interestingly also big brands offering innovative services: both Mercedes and Hyundai, for example, let users start their cars and pre-warm them from various voice assistant platforms.

— Entertainment

Various games, jokes and sound libraries are available on all the major platforms from a variety of providers, often either the platform provider themselves (i.e. Google or Amazon) or small companies or individual developers.
A few brands are starting to experiment more with the possibilities of the platforms, however; for example, Netflix and Google released a companion experience for Season 2 of Stranger Things, and the BBC recently created a piece of interactive fiction for Alexa.
The potential for entertainment pieces in this area is largely untapped and only just beginning to be explored.

Tools

Many sets of tools exist for building voice services, as well as related (usually AI-based) functionality. By and large, the cloud-based services on offer are free or cheap, and easy to use. Serious projects may require bespoke solutions developed in-house, but that is unnecessary for the majority of requirements.
A full rundown of all the tools available is outside the scope of this article, but notable sets are IBM’s Watson Services, Google’s Speech API and DialogFlow, and Microsoft’s Cognitive Services.
All these mean that prototyping and experimentation can be done quickly and cheaply and production-ready applications can be costed on a usage model, which is very cost effective at small scale.

— Speech Generation

Of particular note to brands are the options around speech generation, as these are literally the part of the brand that end users interact with.
If the service being offered has a static, finite set of possible responses to all user input, it is possible to use recorded speech. This approach can be extended in some cases with a record-and-stitch-together approach such as that used by TfL.
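A record-and-stitch system can be sketched as below (the clip names are hypothetical): each response is assembled from a fixed library of pre-recorded fragments, which is why the approach only works when the full set of possible outputs is known in advance.

```python
# Hypothetical library of pre-recorded audio fragments.
CLIPS = {
    "next_train": "next_train.wav",
    "platform": "platform.wav",
    "1": "one.wav",
    "2": "two.wav",
    "3": "three.wav",
}

def stitch(*fragment_keys):
    """Return the ordered clip files to play back-to-back as one utterance."""
    missing = [k for k in fragment_keys if k not in CLIPS]
    if missing:
        # No recording exists: a real system must fall back to TTS or fail.
        raise ValueError(f"No recording for: {missing}")
    return [CLIPS[k] for k in fragment_keys]

# "Next train ... platform ... two"
print(stitch("next_train", "platform", "2"))
```

The hard failure on a missing fragment illustrates the trade-off: recorded speech sounds natural but cannot say anything outside its library.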
For services with a wide range of outputs, generated voices are the only practical way to go, but even here there are multiple options. Several free, more-or-less "computer"-sounding voices are easily available, but we would recommend exploring approaches that use voice actors to create a satnav-like TTS system.
The rapidly advancing field of machine-learning-generated speech, which can sound very real and even mimic specific people, is worth keeping an eye on; it is not yet generally available, but Google is already using WaveNet for Assistant in the US, and Adobe has been working on a similar project.

The technology behind voice

What people refer to as voice is really a set of different technologies all working together.
Notably, Speech To Text is the ‘voice recognition’ component that processes some audio and outputs written text. This field has improved in leaps and bounds in recent years, to the point where some systems are now better at this than humans, across a range of conditions.
In June, Google’s system was reported to have 95% accuracy (the same as humans, and an improvement of 20% over 4 years), while Baidu is usually rated as having the most accurate system of all with over 97%.
The core of each specific service lies in Intent Derivation, the set of technologies based on working out what a piece of text implies the underlying user intent is — this matches user requests with responses the service is able to provide.
The recent rise in the number (and hype) of bots and bot platforms is related to this technology, and as almost all voice systems are really just bots with voice recognition added in, this technology is crucial. There are many platforms that provide this functionality (notably IBM Watson, and the free DialogFlow, among many others).
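To illustrate the difference, a naive keyword matcher (the pre-machine-learning baseline the article contrasts with) can be sketched in a few lines; real intent-derivation platforms such as DialogFlow or Watson instead train classifiers on example phrases so that unseen wordings still map to the right intent. Everything below is illustrative, not any platform's API:

```python
# Naive keyword-based intent matching: brittle, but shows the shape of the task.
INTENT_KEYWORDS = {
    "order_taxi": {"taxi", "cab", "ride"},
    "check_weather": {"weather", "rain", "forecast"},
}

def derive_intent(utterance):
    """Pick the intent whose keyword set overlaps the utterance most."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(derive_intent("can you get me a taxi please"))
print(derive_intent("will it rain tomorrow"))
print(derive_intent("tell me a joke"))  # no keyword overlap: needs a richer model
```

The last example is exactly where keyword matching breaks down and statistical intent derivation earns its keep: a trained model can map "tell me a joke" to an entertainment intent it has seen phrased differently.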
The other important set of voice-related technologies revolve around Speech Generation. There are many ways to achieve this and the options are very closely related to the functionality of the specific voice service.
The tools and options relating to this are explored earlier in this article, but they range widely in cost and quality, based on the scope of the service and the type of output that can be given to users.

Considerations

Creating a voice-first service involves additional considerations as compared to other digital services.
First and foremost, user privacy is getting increased attention as audio recordings of users are sent to the platform and/or brand and often stored there. Depending on the manner in which the service is available to users this may be an issue just for the platform involved, or may be something the brand needs to address directly.
Recently the C4 show 'Celebrity Hunted' caused a bit of a backlash against Alexa, as users saw first-hand the power of stored recordings being available in the future. There are also worries about the 'always on' potential of the recording, despite major platforms repeatedly assuring users that only phrases starting with the wake word are recorded and sent to the cloud.
As with most things, however, a reasonable value exchange is the safest way to proceed: essentially, ensure that the offering is genuinely useful or entertaining.
A phone on a dinner table is a lot more socially acceptable than it was a few years ago. Talking out loud to devices will go the same way. Image via musely.com
Another consideration, as touched upon earlier in this article, is that the right service for a voice-first interface may not be something your brand already offers — or at the least that the service may need adaptation to be completely right for the format. We’ve found during workshops that the most interesting use cases for branded voice services often require branching out into whole new areas.
Perhaps most interestingly, this area allows for a whole new interesting set of data to be collected about users of the service — actual audio recordings aside, novel services being used in new contexts (at home without a device in hand, multiuser, etc) should lead to interesting new insights.

Recommendations for brands

We believe that long term, many brands will benefit from having some or all of their core digital services available over voice interfaces, and that the recent proliferation of the technology has created opportunities in the short and medium terms as well.
A good starting point is to start to include voice platforms in the mix for any long term planning involving digital services.
Ideally brands should start to work on an overall voice (or agent, including bots) strategy for the long term. This would encompass which services might best be offered in these different media, and how they may interact with customer services, CRM, social and advertising functions as well as a roadmap to measure progress against.
In the short term, we believe brands ought to experiment using off-the-shelf tools to rapidly prototype and even to create short-lived productions, perhaps related to campaigns.
The key area to focus on for these experiments should be how the overall brand style, tone of voice, and customer service scripts convert into a voice persona, and how users respond to variations in this persona.
This experimentation can be combined with lightweight voice-first service design in service of campaigns, but used to build an overall set of guides and learnings that can be used for future core brand services.

Tuesday, January 9, 2018

Who owns the internet?


Six perspectives on net neutrality

This week, the Federal Communications Commission will vote on the future of net neutrality. Whether you’ve been following the political back and forth, skimming the headlines, or struggling to decode acronyms, the decision will have an impact on what we can do online (and who can afford to do it). Because the internet has effectively been free and open since the day it was born, it’s easy to lose sight of the impact this vote will have.
The reality is, the internet is a fragile thing. Open, crazy, weird spaces where people swap stories and secrets, create rad digital art projects, type furiously and freely with people seven time zones away — these spaces are rare. People build them, people sustain them, and now, people are trying to restrict them. If this week’s vote passes — which is looking increasingly likely — the internet’s gatekeepers will have more control over their gates than ever before.
Because we live and breathe the internet, laugh and cry on the internet, connect with people who’ve tangibly changed our lives on the internet, we decided to gather some perspectives on this moment in time. Why it matters, how we got here, and what the future may hold. Here are some of the most insightful essays we’ve found on Medium to help us make sense of the fight to keep the net wild and free.

In 1989, Tim Berners-Lee invented the World Wide Web. Now, he’s defending it. “I want an internet where consumers decide what succeeds online, and where ISPs focus on providing the best connectivity,” Berners-Lee emphasizes. Content and connectivity are two distinct markets, and they must remain separate. Conflating them risks blocking innovation, free expression, and the kind of creativity that can only thrive online.
What’s happening now is not just about net neutrality, law professor Lawrence Lessig argues, but about the foundations of our democracy. Tracing the history of the concept from its origins in the aughts (one of his students, Tim Wu, coined the term “net neutrality”), Lessig sees the rollback of Obama-era regulations as a symptom of a larger issue: a democracy that doesn’t serve its people.
Through statistical analysis and natural language processing, data scientist Jeff Kao shows that millions of pro-repeal comments submitted to the FCC were faked. Organic public comments, according to Kao’s analysis, overwhelmingly supported preserving existing regulations. The report calls into question the legitimacy of the FCC’s comment process, and the basis of chairman Pai’s intention to roll back regulations.
In part one of a five-part series on net neutrality, computer scientist Tyler Elliot Bettilyon takes us back to FDR’s New Deal. Piecing together the history of “common carrier” laws — those that govern everything from shipping to telephone lines — Bettilyon contextualizes today’s fight for a free and open internet.
Social psychologist E Price interrogates the idea that the internet we’ve grown to love is really as “free and open” as we’d like to think. “Internet activity is already deeply centralized,” Erika writes, and major social media sites are today’s answer to the Big Three TV networks of a few decades ago. The internet is closer to cable than we think, and it’s (probably) about to get even closer.
Why should the internet be a public utility? Economist umair haque debunks the “competition will lower prices” argument against internet regulation, and makes a compelling case for why going online, “just like water, energy, and sanitation,” should be a basic right: “It dramatically elevates our quality of life, best and truest when we all have free and equal access to it.”
Visit battleforthenet to write or call your congressperson in advance of the vote. You can also text a few words of your choice to Resistbot.

Saturday, January 6, 2018

What we learned about productivity from analyzing 225 million hours of working time in 2017


This post was originally published on the RescueTime blog. Check us out for more like it.
When exactly are we the most productive?
Thinking back on your last year, you probably have no idea. Days blend together. Months fly by. And another year turns over without any real understanding of how we actually spent our time.
Our mission at RescueTime has always been to help you do more meaningful work. And this starts with understanding how you spend your days, when you’re most productive, and what’s getting in your way.
In 2017, we logged over 225 million hours of digital time from hundreds of thousands of RescueTime users around the world.
By studying the anonymized data of how people spent their time on their computers and phones over the past 12 months, we’ve pinpointed exactly what days and times we do the most productive work, how often we’re getting distracted by emails or social media, and how much time a week we actually have to do meaningful work.
Key Takeaways:

What was the most (and least) productive day of 2017?

Simply put, our data shows that people were the most productive on November 14th. In fact, that entire week ranked as the most productive of the year.
Which makes sense. With American Thanksgiving the next week and the mad holiday rush shortly after, mid-November is a great time for people to cram in a few extra work hours and get caught up before gorging on turkey dinner.
On the other side of the spectrum, we didn’t get a good start to the year. January 6th — the first Friday of the year — was the least productive day of 2017.

Now, what do we mean when we talk about the “most” or “least” productive days?

RescueTime is a tool that tracks how you spend your time on your computer and phone and lets you categorize activities on a scale from very distracting to very productive. So, for example, if you’re a writer, time spent in Microsoft Word or Google Docs is categorized as very productive while social media is very distracting.
From that data, we calculate your productivity pulse — a score out of 100 for how much of your time you spent on activities that you deem productive.
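As a rough illustration, a pulse-style score can be computed as a time-weighted average of per-category scores. The category names and weights below are assumptions for the sketch, not RescueTime's actual values:

```python
# Hypothetical sketch of a "productivity pulse"-style score: a weighted
# average over time logged, where each activity category carries a score
# from 0 (very distracting) to 100 (very productive). Category names and
# weights are illustrative only.

CATEGORY_SCORES = {
    "very productive": 100,
    "productive": 75,
    "neutral": 50,
    "distracting": 25,
    "very distracting": 0,
}

def productivity_pulse(minutes_by_category):
    """Return a 0-100 score from a {category: minutes logged} mapping."""
    total = sum(minutes_by_category.values())
    if total == 0:
        return 0.0
    weighted = sum(CATEGORY_SCORES[cat] * mins
                   for cat, mins in minutes_by_category.items())
    return weighted / total

# Example day: 3h writing, 1h email triage, 1h social media
day = {"very productive": 180, "neutral": 60, "very distracting": 60}
print(round(productivity_pulse(day)))  # 70
```

A day scoring 60, like the November 14th average mentioned below, simply means the time-weighted mix of activities leaned well toward the productive end of the scale.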
On November 14th, the average productivity pulse across all RescueTime users was a not-so-shabby 60.

How much of our day is spent working on a digital device?

One of the biggest mistakes so many of us make when planning out our days is to assume we have 8+ hours to do productive work. This couldn’t be further from the truth.
What we found is that, on average, we only spend 5 hours a day working on a digital device.
And with an average productivity pulse of 53% for the year, that means we only have 12.5 hours a week to do productive work.

What does the average “productive day” look like?

Understanding our overall productivity is a fun exercise, but our data lets us go even deeper.
Looking at the workday (from 8am–6pm, Monday to Friday), how are we spending our time? When do we do our best work? Do different tasks normally get done at different times?
Here’s what we found out:

Our most productive work happens on Wednesdays at 3pm

Our data showed that we do our most productive work (represented by the light blue blocks) between 10am and noon and then again from 2–5pm each day. However, breaking it down to the hour, we do our most productive work on Wednesdays at 3pm.
Light blue represents our most productive work

Email rules our mornings, but never really leaves us alone

Our days start with email, with Monday morning at 9am being the clear winner for most time spent on email during the week.
Light blue represents our busiest time for emails

Software developers don’t hit peak productivity until 2pm each day

What about how specific digital workers spend their days?
Looking at the time spent in software development tools, our data paints a picture of a workday that doesn’t get going until the late morning and peaks between 2–6pm daily.
Light blue represents when we’re using software development tools

While writers are more likely to be early birds

For those who spend their time writing, it’s a different story.
Writing apps were used more evenly throughout each day with the most productive writing time happening on Tuesdays at 10am.
Light blue represents when we’re using writing tools

What were the biggest digital distractions of 2017?

It’s great to pat ourselves on the back about how productive we were in 2017. But we live in a distracted world and one of our greatest challenges is to stay focused and on task.
Here’s what our research discovered about the biggest time wasters of last year:

On an average day we use 56 different apps and websites

Depending on what you do, this number might not seem that bad. However, when we look at how we use those different apps and websites, things get a bit hairier.
When it comes to switching between different apps and websites (i.e. multitasking), we jump from one task to another nearly 300 times per day and switch between documents and pages within a site 1,300 times per day.

For Slack users, 8.8% of our day is spent in the app

There’s been a lot of talk about how much email and communication eats into our days. But what do the numbers look like?
What we found is that people who use Slack as their work communication tool spend almost 10% of their workday in the app (8.8% to be exact).

We check email or IM 40 times every day

What’s more telling is how often we check our communication tools, whether email or instant messengers like Slack or HipChat.
On average, we check our communication apps 40 times a day, or once every 7.5 minutes during our 5 hours of daily digital work time.

Almost 7% of every workday is spent on social media

I’m sure most of us try not to spend time on social media while at work. But our data showed that almost 7% of every workday was spent on social media.
It’s not only time spent that’s the issue, however. On average, we check in on social media sites 14 times per workday, or nearly 3 times an hour during our 5-hour digital day.

So, what does all this tell us about how we spend our days?
Well, first off, we need to remember that averages shouldn’t be treated as universal truths. Everyone works differently. But having a high-level look at productivity and the things that get in its way is a powerful tool in improving how you work.
The biggest piece of advice we can pull from all this data is to be aware of the limited time you have each day for meaningful work, and spend it wisely.
Our days are filled with distractions, and it’s up to us to protect what time we have.

Friday, January 5, 2018

How Uber was made


Uber has transformed the world. Indeed, it's inconceivable to think of a world without the convenience of its innovative ride-sharing service. Tracing its origins to a market which is constantly being deregulated, Uber has emerged triumphant. Valued at roughly US$66 billion, Uber has rapidly expanded to establish branches in over 581 cities across 82 countries, with the United States, Brazil, China, Mexico and India being its most active countries.
If that wasn’t impressive enough, in 2016 the company completed its two billionth ride. When you consider the fact that the first billion rides took Uber 6 years, and the second billion came in a mere 6 months, it’s not surprising to see Uber emerge as a global business leader. This worldwide phenomenon is built on a simple idea, seductive in its premise - the ability to hail a car with nothing but your smartphone.
It took the problem of hailing a taxi and gave everyone an equitable solution while further capitalizing on the emerging market. And smart people are asking the right question: How do I build an app like Uber for my business needs?

Humble Beginnings

It all started in 2008, with the founders of Uber discussing the future of tech at a conference. By 2010, Uber officially launched in San Francisco. In 6 months, they had 6,000 users and provided roughly 20,000 rides. What was the key to their success? For one, Uber’s founders focused on attracting both drivers and riders simultaneously. San Francisco was the heart of the tech community in the US and was thus the perfect sounding board for this form of technological innovation to thrive.
In the beginning, Uber spread their App through word of mouth, hosting and sponsoring tech events, and giving participants of their events free rides with their app. This go-to-market approach persists today - giving 50% discounts to new riders for their first Uber ride. This initial discount incentivized users to become long-term riders, and the rest was history. As more and more people took to social media to tell the world about this innovative new App, the sheer brilliance of their marketing strategy paid off.

Product Technology Cohesion: How Uber Works

What makes Uber, Uber? For one, its ubiquitous appeal, or the way in which they streamlined their product, software and technology. It was, at the start, fresh, innovative, and had never been seen before. So if one were to replicate the model, they’d need to look at Uber’s branding strategy.
To use Uber, you have to download the app, which launched first on iPhone, then extended to Android and Blackberry.
Uber’s co-founders, Garrett Camp and Travis Kalanick, relied heavily on 6 key technologies based on iOS and Android geolocation. What really sold it, though, was its clear core value: the ability to map and track all available taxis in your given area. All other interactions are based on this core value, and it’s what sets Uber (and will set your app) apart from the crowd. To build an App like Uber, you’ll need to have:
1. Registering/Log-in features: Uber allows you to register with your first name, last name, phone number and preferred language. Once you’ve signed up, they’ll send you an SMS to verify your number, which will then allow you to set your payment preferences. Trip fares are charged after every ride through this cashless system.
2. Booking features: This allows drivers the option to accept or deny incoming ride requests and get information on the current location and destination of the customer.
3. The ability to identify a device’s location: Uber, via the CoreLocation framework (on iOS), obtains the geographic location and orientation of a device in order to locate riders and schedule pickups. Understanding iOS and Android geolocation features is crucial for this step, because that’s what your App is running on.
4. Point to Point Directions: The Uber App provides directions to both the driver and the user. Developers of the Uber App use MapKit for iOS and Google Maps Android API for Android to calculate the route and make directions available. They further implemented Google Maps for iPhone and Android, but cleverly adapted technology from other mapping companies to solve any logistical issues that might come up.
5. Push Notifications and SMS: You get up to 3 notifications instantly from Uber when you book a ride.
  • A notification telling you when the driver accepts your request
  • One when the driver is close to your location
  • One in the off chance your ride has been cancelled
You further get the full update on your driver’s status, down to the vehicle make and license number, and an ETA on the taxi’s time of arrival.
6. Price Calculator: Uber offers a cashless payment system, paying drivers automatically after every ride, processed through the user’s credit card. Uber takes 25% of the driver’s fare, making for easy profit. They partnered with Braintree, a world leader in the mobile payment industry, but other good options available are Stripe, or PayPal via Card.io.
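Features 3 and 4 above rest on one primitive: given a rider's coordinates, find the nearest available driver. Here is a minimal sketch in Python; the haversine formula is standard, but the driver records, field names, and coordinates are invented for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 6371 * 2 * asin(sqrt(a))  # Earth radius ~6371 km

def nearest_available_driver(rider, drivers):
    """rider: (lat, lon); drivers: dicts with 'id', 'lat', 'lon', 'available'.
    Returns the closest available driver, or None if nobody is free."""
    candidates = [d for d in drivers if d["available"]]
    if not candidates:
        return None
    return min(candidates,
               key=lambda d: haversine_km(rider[0], rider[1],
                                          d["lat"], d["lon"]))

drivers = [
    {"id": "d1", "lat": 37.7749, "lon": -122.4194, "available": True},
    {"id": "d2", "lat": 37.8044, "lon": -122.2712, "available": True},
    {"id": "d3", "lat": 37.7750, "lon": -122.4180, "available": False},
]
print(nearest_available_driver((37.7760, -122.4170), drivers)["id"])  # d1
```

A production dispatch system would use a spatial index rather than a linear scan, but the matching logic is the same idea.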
Here are a few more much sought after features for the user’s side of the App:
  • The ability to see the driver’s profile and status: Your customers will feel safer being able to see your driver’s verification, and it makes good security sense to ensure you know who’s using your App for profit.
  • The ability to receive alerts: Receive immediate notifications about the status of your ride and any cancellations.
  • The ability to see the route from their phones (an in-built navigation system): This is intrinsically linked to your geolocation features; you want to be able to direct your taxis to the quickest, most available routes.
  • Price calculation: Calculating a price on demand and implementing a cashless payment system.
  • A “split fare” option: Uber introduced this option with great success. It allows friends to split the price of the ride.
  • Requesting previous drivers: It’s a little like having your favourite taxi man on speed dial, and is a good way of ensuring repeat customers.
  • Waitlist instead of surge pricing: Avoid the media hassle of surge pricing by employing a waitlist feature, so your users can be added to a waiting list rather than charged more than they should be. This also keeps them from refreshing the App during peak hours, reducing the load on your backend infrastructure.
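The pricing-related features above (the price calculator, the 25% commission, and the split-fare option) can be sketched together. All of the rates below are made-up illustrative numbers; only the 25% commission figure comes from this article:

```python
def calculate_fare(distance_km, duration_min,
                   base=2.00, per_km=1.20, per_min=0.25, minimum=5.00):
    """Simple time-and-distance fare with a minimum-fare floor.
    All rates here are illustrative, not Uber's actual pricing."""
    fare = base + per_km * distance_km + per_min * duration_min
    return round(max(fare, minimum), 2)

def commission_split(fare, commission_rate=0.25):
    """Divide a fare into the platform's cut and the driver's payout."""
    commission = round(fare * commission_rate, 2)
    return commission, round(fare - commission, 2)

def split_fare(fare, riders):
    """Share a fare evenly between riders, assigning any leftover cents
    to the first riders so the shares always sum to the fare."""
    cents = round(fare * 100)
    share, remainder = divmod(cents, riders)
    shares = [share + (1 if i < remainder else 0) for i in range(riders)]
    return [s / 100 for s in shares]

fare = calculate_fare(distance_km=10, duration_min=20)
print(fare)                    # 19.0
print(commission_split(fare))  # (4.75, 14.25)
print(split_fare(fare, 3))     # [6.34, 6.33, 6.33]
```

Working in integer cents for the split avoids the classic floating-point problem of three "equal" shares not adding back up to the original fare.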
Another key to Uber’s success, that should be noted by potential developers of similar Apps, is the way in which Uber operates. They tap into more than one market which equates to more riders, more drivers, and more business for the company. Uber has mastered the art of localization - the ability to beat out pre-existing markets and competitors, which further retains their customer base by improving their own business strategy.
They’ve taken local context and circumstances into consideration. For example, they partnered with PayPal in November 2013 because many people in Germany don’t use credit cards, and switched to services based on SMS messages in Asia, where there are more people but fewer smartphones per capita. This helps them cater to various markets and optimize profits.
The Uber marketing strategy isn’t static - it’s dynamic. Expansion was necessary, and the business model reaps profits from saturating the taxi market with their customers and drivers, driving their exponential growth. What aspiring App developers can take from this is that you need to design your App for flexibility.
Design your App in a way that’s going to let it take a hit and roll with the punches. Having a system in place that allows you to build and integrate changes effectively within the App, and allows team members to communicate effectively, is of paramount importance.
What made Uber so successful was its ability to reshape how we think about technology and its operation. Indeed, it made the market a better, more efficient place through its innovative on-demand service.

What Technology is Uber Built on?

The tech side of the App is written largely in JavaScript, which is also used to calculate supply and predict demand, with the real-time dispatch systems built on Node.js and Redis. Java and Objective-C are used for the Android and iPhone apps respectively. Twilio is the force behind Uber’s text messages, and push notifications are implemented through the Apple Push Notification Service on iOS and Google Cloud Messaging (GCM) for the Android App.

How much does Uber make?

Actually, it’s a lot less than you think. Despite the $66 billion valuation, most of Uber’s 25% commission goes towards credit card processing, interest, tax, compensation for employees, customer support, marketing, and various anti-fraud efforts, leaving only about $0.19 per ride.

How much does it take to build Uber?

Uber’s not just one App, it’s two - one for the rider and one for the driver. The cost of developing an App like Uber is dependent on a number of factors:
  • the cost of building an MVP
  • product development and acquisition
  • getting the economics of marketing sorted
  • the constant cost of building on and improving your App’s analytic capabilities
When you make an App like Uber, you’ll invest a fair bit into design services, backend and web development, project management, not to mention Android and iOS native app development. The total man-hours round out to around 5,000 hours for similar on-demand taxi Apps, which puts the cost of developing such an App at around $250,000 (assuming that your team works for $50 an hour). However, since hourly rates roughly range from $20 to $150, costs could be considerably higher or lower.
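Multiplying the rough 5,000-hour total by the quoted hourly-rate range gives the spread of likely costs:

```python
def project_cost(hours, hourly_rate):
    """Back-of-the-envelope project cost: total man-hours times rate."""
    return hours * hourly_rate

HOURS = 5000  # rough total for rider + driver apps, per the estimate above
for rate in (20, 50, 150):
    print(f"${rate}/hr -> ${project_cost(HOURS, rate):,}")
# $20/hr -> $100,000
# $50/hr -> $250,000
# $150/hr -> $750,000
```

These figures ignore ongoing costs (hosting, support, marketing), so treat them as a floor for the initial build, not a total budget.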

Conclusion

To wrap up, Uber’s success was due to several factors: a clear business model, a set of interaction features built around a single core value, and a marketing strategy focused on attracting both riders and drivers.
The question on everyone’s mind of course is how can you reduce the overall risk of failure by making sure that your idea and product are viable when you’re developing an App?
One way is to use a Mobile App development partner (such as Octodev) that has worked on many such Apps and understands the processes involved. The advantage of using such a partner is that their practical experience in product development helps you avoid the pitfalls and make the most of your vision.
Octodev App Development Process
Another important part of ensuring that your App development project is swiftly and smoothly executed is having a clear road map and regular communication during the project. There are many approaches to achieve this and we, at Octodev, use a consultative approach to App development. We draw from our successful App implementations. Get in touch with us now if you want an accurate cost for your own Uber like App idea.
This article was originally published on the Octodev Blog.
