Samsung has launched the Samsung Galaxy A9 (2018), the world's first smartphone with four rear cameras. The phone was unveiled at an event in Kuala Lumpur, Malaysia, on Thursday.
The headline feature of the Samsung Galaxy A9 (2018) is its quadruple rear camera, which makes it the first smartphone in the world with a four-camera rear setup. For comparison, the Galaxy A7 was launched with three rear cameras.
Samsung Galaxy A9 (2018) specification
This dual-SIM phone runs Android Oreo 8.1 and has a 6.3-inch Full HD+ Super AMOLED display. It is powered by a Qualcomm Snapdragon 660 processor with up to 8 GB of RAM and 128 GB of storage.
The Samsung Galaxy A9 (2018) has four rear cameras: a 24-megapixel main lens, a 10-megapixel telephoto lens with 2x optical zoom, an 8-megapixel ultra-wide-angle lens, and a fourth 5-megapixel lens. The four cameras are arranged vertically in a single line. The front has a 24-megapixel camera.
Samsung Galaxy A9 (2018) has a 3800 mAh battery that supports fast charging. There will be a fingerprint sensor in the phone's power button.
Price of Samsung Galaxy A9 (2018)
The price of the Samsung Galaxy A9 (2018) is 599 euros, which is approximately Rs 51,300. However, there is still no word on what the Samsung Galaxy A9 (2018) will cost in India. The phone will be available in Bubblegum Pink, Caviar Black and Lemonade Blue colour variants.
I
started writing a blog in May 2016, partly because I kept writing rants
on Facebook that apparently were “too good not to be online somewhere”,
and partly because I was bored after my Master’s degree and wanted
something to do with my Sunday mornings.
Sleeping in, of course, was never an option.
18
months later, and I’ve written about 100,000 words, been published in
all sorts of places, and am now getting regular offers to pitch to major
publications — more on this in the coming months.
And
most importantly of all, I got to 10,000 followers. This time last
year, it was 100 and about half of them were related to me.
All in all, it’s been a good year.
So
what’s in store for the Health Nerd? You’ll be happy to know that this
year I’ve applied for a PhD with the University of Wollongong, which is
actually super exciting and not scary like it feels to me sometimes. I’m
also going to be — hopefully — releasing some episodes of a podcast
that I’ve started with a brilliant co-host. The topic will be science in
the media and I’m really excited to introduce all of you to my dulcet
tones over the airwaves.
I’m so much less awkward than I am in text.
What
does all of this activity mean to the blog? Nothing! I’ll still be
aiming for my regular one health story a week on Medium, as well as an
extra members-only article a month for all you subscribers who love
that extra content.
To
sum up, I’d just like to say thank you to you all. I’d never have made
it here without all you brilliant people following me and making this
all worthwhile. It was a fantastic 2017, and 2018 shows every sign of
being brilliant as well.
A
prankster who made repeated hoax distress calls to the US Coast Guard
over the course of 2014 probably thought they were untouchable. They
left no fingerprints or DNA evidence behind, and made sure their calls
were too brief to allow investigators to triangulate their location.
Unfortunately
for this hoaxer, however, voice analysis powered by AI is now so
advanced that it can reveal far more about you than a mere fingerprint.
By using powerful technology to analyse recorded speech, scientists
today can make confident predictions about everything from the speaker’s
physical characteristics — their height, weight, facial structure and
age, for example — to their socioeconomic background, level of income
and even the state of their physical and mental health.
One of the leading scientists in this field is Rita Singh of Carnegie Mellon University’s Language Technologies Institute.
When the US Coast Guard sent her recordings of the 2014 hoax calls,
Singh had already been working in voice recognition for 20 years. “They
said, ‘Tell us what you can’,” she told the Women in Tech Show podcast earlier this year. “That’s when I started looking beyond the signal. How much could I tell the Coast Guard about this person?”
What your voice says about you
The
techniques developed by Singh and her colleagues at Carnegie Mellon
analyse and compare tiny differences, imperceptible to the human ear, in
how individuals articulate speech. They then break recorded speech down
into tiny snippets of audio, milliseconds in duration, and use AI
techniques to comb through these snippets looking for unique
identifiers.
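As a rough illustration of that first framing step (this is not Singh's actual pipeline; the function name and window sizes below are invented for the sketch), slicing a recording into short, overlapping windows looks something like this in Python:

```python
def frame_signal(samples, sample_rate, frame_ms=25, hop_ms=10):
    """Split an audio signal into short, overlapping frames.

    The articulation differences these systems look for are only
    visible at the millisecond scale, which is why recordings are
    analysed frame by frame rather than as a whole.
    """
    frame_len = int(sample_rate * frame_ms / 1000)  # samples per frame
    hop = int(sample_rate * hop_ms / 1000)          # step between frames
    frames = []
    start = 0
    while start + frame_len <= len(samples):
        frames.append(samples[start:start + frame_len])
        start += hop
    return frames

# One second of (silent) audio at 16 kHz
audio = [0.0] * 16000
frames = frame_signal(audio, 16000)
print(len(frames), len(frames[0]))  # 98 frames of 400 samples (25 ms) each
```

A real system would then extract spectral features from each frame and feed them to trained models; the sketch only shows the slicing.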
Your
voice can give away plenty of environmental information, too. For
example, the technology can guess the size of the room in which someone
is speaking, whether it has windows and even what its walls are made of.
Even more impressively, perhaps, the AI can detect signatures left in
the recording by fluctuations in the local electrical grid, and can then
match these to specific databases to give a very good idea of the
caller’s physical location and the exact time of day they picked up the
phone.
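To make the mains-hum idea concrete, here is a toy sketch (my own illustration, not the researchers' method): it measures a recording's energy at the two common grid frequencies, 50 Hz and 60 Hz, and reports which one dominates. Real electrical network frequency (ENF) analysis goes much further, tracking tiny fluctuations in that hum over time and matching them against logged grid data to estimate place and time.

```python
import math

def hum_frequency(samples, sample_rate, candidates=(50.0, 60.0)):
    """Estimate which mains frequency dominates a recording by
    comparing the DFT magnitude at each candidate frequency."""
    best, best_power = None, -1.0
    for f in candidates:
        re = sum(s * math.cos(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * i / sample_rate)
                 for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best, best_power = f, power
    return best

# One second of faint 60 Hz hum, sampled at 1 kHz
rate = 1000
recording = [0.01 * math.sin(2 * math.pi * 60 * i / rate) for i in range(rate)]
print(hum_frequency(recording, rate))  # 60.0
```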
This
all applies to a lot more than hoax calls, of course. Federal criminal
cases from harassment to child abuse have been helped by this relatively
recent technology. “Perpetrators in voice-based cases have been found,
have confessed, and their confessions have largely corroborated our
analyses,” says Singh.
Portraits in 3D
And
they’re just getting started: Singh and her fellow researchers are
developing new technologies that can provide the police with a 3D visual
portrait of a suspect, based only on a voice recording. “Audio can give us a facial sketch of a speaker, as well as their height, weight, race, age and level of intoxication,” she says.
But
there’s some way to go before voice-based profiling technology of this
kind becomes viable in a court. Singh explains: “In terms of
admissibility, there will be questions. We’re kind of where we were with
DNA in 1987, when the first DNA-based conviction took place in the
United States.”
This
has all proved to be bad news for the Coast Guard’s unsuspecting
hoaxer. Making prank calls to emergency services in the US is regarded
as a federal crime, punishable by hefty fines and several years of jail
time; and usually the calls themselves are the only evidence available.
Singh was able to produce a profile that helped the Coast Guard to eliminate false leads and identify a suspect, against whom they hope to bring a prosecution soon.
Given
the current exponential rate of technological advancement, it’s safe to
say this technology will become much more widely used by law
enforcement in the future. And for any potential hoax callers reading
this: it’s probably best to stick to the old cut-out newsprint and glue
method for now. Just don’t leave any fingerprints.
Have you read Paid Applications Agreement, Schedule 2, Section 3.8(b)?
If
you’ve ever submitted an app to the App Store, you know the frustration
when Apple rejects your submission. Even more so when you thought you’d
followed all the rules. As it turns out, Apple can bury requirements
wherever they want, and it’s your burden to keep up.
About
a year ago, Apple started rejecting apps that didn’t comply with
Schedule 2, Section 3.8(b) of the Paid Applications Agreement, a verbose
list of self-evident truths about subscriptions. The Paid Applications
Agreement is a 37-page document that you had to agree to before you
could submit your app. It is only available via iTunes Connect in the form of a downloadable PDF.
The actual contents of Schedule 2, Section 3.8(b):
3.8(b)
requires that you “clearly and conspicuously disclose to users” all of
the above bullets. The first few items seem harmless enough but then we
start to get off into the weeds.
Apple
wants you to reproduce, “clearly and conspicuously”, all the details of
auto-renewing subscriptions. This information should be part of the
standard StoreKit subscription purchase flow. None of these bullets have
anything app specific to them. They are just boilerplate legalese.
Apple
has an iOS level user interface flow for in-app purchases that is quite
good as of iOS 11. This view already covers most of the in-the-weeds
bullets, except telling users about the 24-hour renewal policy.
Requiring
every developer to implement their version of 3.8(b) is costly and
creates a fractured experience for the user. Apple should be putting it
in the standard sheet. But it’s Apple’s walled garden. When they say
jump, you say “fine, whatever.”
How to Comply With 3.8(b)
According
to recent rejections that I’ve seen (as of Jan. 8th, 2018), reviewers
are being more particular about what your purchase flow requires. From a
recent rejection:
Adding
the above information to the StoreKit modal alert is not sufficient;
the information must also be displayed within the app itself, and it
must be displayed clearly and conspicuously during the purchase flow
without requiring additional action from the user, such as opening a
link.
All
of the information in 3.8(b) must be “displayed clearly and
conspicuously during the purchase flow without requiring additional
action from the user, such as opening a link.” Your beautiful and
compact purchase flow must include in it, somewhere, nine bullets
written by a lawyer.
Confide, recently updated, achieved it with the following:
According to one reviewer, being below the fold with a leading arrow qualifies as “clearly and conspicuously.”
For another data point, I know of one recently rejected developer who had the same information, but in another view that was linked from the purchase flow with a button. This did not qualify (according to one reviewer).
A Template
Include a customized version of the following “clearly and conspicuously” in your purchase flow:
A
[purchase amount and period] purchase will be applied to your iTunes
account [at the end of the trial or intro| on confirmation].
Subscriptions will automatically renew unless canceled at least 24 hours before the end of the current period. You can cancel anytime in your iTunes account settings. Any unused portion of a free trial will be forfeited if you purchase a subscription.
For more information, see our [link to ToS] and [link to Privacy Policy].
Put it on the screen where you initiate the in-app purchase. Below the fold might be OK, but you might want to add something that leads users there.
UPDATE: Readers are telling me it may also be required that you include this information in your App Store description. It's a much easier change to make, so I recommend you add it there too.
Why Has Apple Taken a Legal Problem and Made It Ours?
Apple
shouldn’t be burying submission requirements in the bodies of contracts
that nobody will read. If Apple wants developers to know something,
they should put it in the App Store Guidelines, HIG, or developer
documentation. The cost of making changes in a software project right at
the end can be astronomical. Dropping a bomb like this on developers at
submission shows a total lack of regard for our costs.
Why
didn’t they just update the iOS in-app purchase sheet? I speculate that
Apple discovered some legal exposure from in-app subscriptions and
fixed it with lawyers instead of designers. This problem could be
universally solved with an iOS update, but I think some side effect of
Apple being a vast, lumbering bureaucracy made forcing 3.8(b) onto
developers the more politically convenient path. Apple, if you are
reading this, please either update the iOS sheet or move the
requirements to the App Store guidelines, so fewer developers get caught
unawares.
RevenueCat
is the best way to implement subscriptions in your mobile app. We
handle all the complicated parts so you can get back to building.
Request an invite today at https://www.revenuecat.com/
Social
media and digital executives in newsrooms already have a tough job
connecting their content to consumers via social media, but Facebook's proposed changes to the algorithm behind its ‘newsfeed’
are going to make it a lot harder. Social networks offer immense
opportunities for reaching vast new audiences and increasing the
engagement of users with journalism. The most important platform in the
world is about to make that more difficult.
Clearly, this is a blow for news publishers who have spent the last decade or so fighting a battle for survival in a world where people's attention and advertising have shifted to other forms of content and away from news media brands' own sites. They are clearly very concerned. Yet, could this be a wake-up call that will mean the better, more adaptive news brands benefit?
I’m
not going to argue that this is good news for news publishers, but
blind panic or cynical abuse of Facebook is not a sufficient response.
The honest answer is that we don't know exactly what the effect will be, because Facebook, as usual, has not given out the details, and different newsrooms will be impacted differently.
It’s exactly the kind of issue we are looking at in our LSE Truth, Trust and Technology Commission.
Our first consultation workshop with journalists, and related
practitioners from sectors such as the platforms, is coming up in a few
weeks. This issue matters not just for the news business. It is also
central to the quality and accessibility of vital topical information
for the public.
Here’s my first attempt to unpack some of the issues.
Firstly,
this is not about us (journalists). Get real. Facebook is an
advertising revenue generation machine. It is a public company that has a
duty to maximise profits for its shareholders. It seeks people’s
attention so that it can sell it to advertisers. It has a sideline in
charging people to put their content on its platform, too. It is a
social network, not a news-stand. It was set up to connect ‘friends’ not
to inform people about current affairs. Journalism, even where shared
on Facebook, is a relatively small part of its traffic.
Clearly,
as Facebook has grown it has become a vital part of the global (and
local) information infrastructure. Other digital intermediaries such as
Google are vastly important, and other networks such as Twitter are
significant. And never forget that there are some big places such as
China where other similar networks dominate, not Facebook or other
western companies. But in many countries and for many demographics,
Facebook is the Internet, and the web is increasingly where people get their journalism. It’s a mixed and shifting picture but as the Reuters Digital News Report shows, Facebook is a critical source for news.
From Reuters Digital News Report 2017
If you read Zuckerberg's statement, he makes it clear that he is trying to make Facebook a more comfortable place to be:
“recently
we’ve gotten feedback from our community that public content — posts
from businesses, brands and media — is crowding out the personal moments
that lead us to connect more with each other.”
His users are ‘telling him’ (i.e. they are spending less time on FB) what a plethora of recent studies and books have shown: that using Facebook can make you miserable. News
content — which is usually ‘bad’ news — doesn’t cheer people up. The
angry, aggressive and divisive comment that often accompanies news
content doesn’t help with the good vibes. And while the viral spread of
so-called ‘fake news’ proves it is popular, it also contributes to the
sense that Facebook is a place where you can’t trust the news content.
Even when it is credible, it’s often designed to alarm and disturb. Not
nice. And Facebook wants nice.
“We
can’t make money unless you keep telling us things about yourself that
we can sell to advertisers. Please stop talking about news.”
Another
accusation is that Facebook is making these changes because of the
increasing costs it is expending at the behest of governments who are
now demanding it does more to fight misinformation and offensive
content. That might be a side-benefit for Facebook but I don’t think
it’s a key factor. It might even be a good thing for credible news if
the algorithmic changes include ways of promoting reliable content. But
overall the big picture is that journalism is being de-prioritised in
favour of fluffier stuff.
Even Jeff Jarvis, the US pioneer of digital journalism who has always sought to work with the grain of the platforms, admits that this is disturbing:
“I’m
worried that news and media companies — convinced by Facebook (and in
some cases by me) to put their content on Facebook or to pivot to
video — will now see their fears about having the rug pulled out from
under them realized and they will shrink back from taking journalism to
the people where they are having their conversations because there is no
money to be made there.”
The
Facebook changes are going to be particularly tough on news
organisations that invested heavily in the ‘pivot to video’. These are
often the ‘digital native’ news brands who don’t have the spread of
outlets for their content that ‘legacy’ news organisations enjoy. The
BBC has broadcast. The Financial Times has a newspaper. These
organisations have gone ‘digital first’ but like the Economist they have
a range of social media strategies. And many of them, like the New York
Times, have built a subscription base. Email newsletters provide an
increasingly effective by-pass for journalism to avoid the social media
honey-trap. It all makes them less dependent on ‘organic’ reach through
Facebook.
But
Facebook will remain a major destination for news organisations to
reach people. News media still needs to be part of that. As the
ever-optimistic Jarvis also points out,
if these changes mean that Facebook becomes a more civil place where
people are more engaged, then journalism designed to fit in with that
culture might thrive more:
“journalism
and news clearly do have a place on Facebook. Many people learn what’s
going on in the world in their conversations there and on the other
social platforms. So we need to look how to create conversational news.
The platforms need to help us make money that way. It’s good for
everybody, especially for citizens.”
News
organisations need to do more — not just because of Facebook but also
on other platforms. People are increasingly turning to closed networks
or channels such as Whatsapp. Again, it’s tough, but journalism needs to
find new ways to be on those. I’ve written huge amounts
over the last ten years urging news organisations to be more networked
and to take advantage of the extraordinary connective, communicative
power of platforms such as Facebook. There have been brilliant innovations by newsrooms over that period to go online, to be social and to design content to be discovered and shared through the new networks.
But this latest change shows how the media environment continues to
change in radical ways and so the journalism must also be reinvented.
Social media journalist Esra Dogramaci has written an excellent article
on some of the detailed tactics that newsrooms can use to connect their
content to users in the face of technological developments like
Facebook’s algorithmic change:
“if
you focus on building a relationship with your audience and developing
loyalty, it doesn’t matter what the algorithm does. Your audience will
seek you out, and return to you over and over again. That’s how you
‘beat’ Facebook.”
Journalism Must Change
The
journalism must itself change. For example, it is clear that emotion is
going to be an even bigger driver of attention on Facebook after these
changes. The best journalism will continue to be factual and objective
at its core — even when it is campaigning or personal. But as I have written before,
a new kind of subjectivity can not only reach the hearts and minds of
people on places like Facebook, but it can also build trust and
understanding.
This
latest change by Facebook is dramatic, but it is a response to what
people ‘like’. There is a massive appetite for news — and not just
because of Trump or Brexit. Demand for debate and information has never
been greater or more important in people’s everyday lives. But we have
to change the nature of journalism not just the distribution and
discovery methods.
The media landscape is shifting to match people’s real media lives in our digital age. Another less noticed announcement from Facebook
last week suggested they want to create an ecosystem for local
personalised ‘news’. Facebook will use machine learning to surface news
publisher content at a local level. It’s not clear how they will vet
those publishers but clearly this is another opportunity for newsrooms
to engage. Again, dependency on Facebook is problematic, to put it
mildly, but ignoring this development is to ignore reality. The old
model of a local newspaper for a local area doesn’t effectively match
how citizens want their local news anymore.
What Facebook Must Do
Facebook
has to pay attention to the needs of journalism and as it changes its
algorithm to reduce the amount of ‘public content’ it has to work harder
at prioritising quality news content. As the Guardian’s outstanding
digital executive Chris Moran points out, there’s no indication from
Facebook that they have factored this into the latest change:
Fighting ‘fake news’ is not just about blocking the bad stuff; it is ultimately best achieved by supporting the good content. How to do that is not a judgement Facebook can be expected, or relied upon, to make by itself. It needs to be much more transparent and collaborative with the news industry as it rolls out these changes in its products.
When something like Facebook becomes this important to society, it is in the public interest, as with any other public utility, to make policy that maximises social benefits. This is why governments around the world are considering and even enacting legislation or regulation regarding platforms like Facebook. Much of this is focused on specific issues such as the spread of extremist or false and disruptive information.
Looking ahead, the app business is expected to do even better, with global mobile app revenue forecast for 2020 at $189 billion.
For
budding developers, it’s time to hop aboard the gravy train. But what’s
the first step in learning mobile app development? What courses should
you sign up for? Should you teach yourself app development? We’ve got
you covered.
And yes, the first step is
learning how to prototype a mobile app. Learn why here — plus get our
top 10 online courses on mobile app development to get you started right
away, no matter where you are!
10 free and paid online courses to help you learn mobile app development
Here are our top 10 online courses to help you learn mobile app development:
1 — Android Development Tips Weekly series on Lynda
Each
week, David shares techniques to help you speed up your coding, improve
app functionality or make your apps more reliable and refined.
The
tutorials cover developing the app’s user interface, backend processing
and open source libraries, to get your coding knowledge off the ground
even quicker.
Level: Beginner — Intermediate
Commitment: approximately 3h per video
Price-point: 30-day free trial, from $19.99 thereafter
Learn
how to create and customize 10+ iPhone apps (using Swift 3 and Xcode 8)
with easy step-by-step instructions. The course begins with
implementation of basic elements — UILabel, UIButton, UITextField
etc. — Auto Layout and multiple-sized icons, with more advanced classes
covering memory issues, storyboarding and displaying rich local
notifications.
Note that this course requires you to own and already be familiar with Mac.
Level: Beginner
Commitment: approximately 33 hours
Price-point: $10.99 (New Year discount, was $50.00)
3 — iOS App Development with Swift Specialization on Coursera
This is the ultimate Swift for iOS development course, brought to you by Parham Aarabi and the University of Toronto.
Using
XCode, Parham will teach you how to design elegant interactions and
create fully functioning iOS apps, such as the photo editing app for
iPhone, iPad, and Apple Watch. The course also includes best practices
to help you become proficient in functional Swift concepts.
Note that this course requires you to own and already be familiar with Mac.
In
this 5-week course, you’ll explore the basics of Android application
components as well as Activities and their lifecycle, some UI design
principles, Multimedia, 2D graphics and networking support for Android.
Level: Beginner
Commitment: 6 weeks
Price-point: free
5 — Full Stack Web and Multiplatform Mobile App Development Specialization on Coursera
If you’re learning mobile application development for Android and found the above course useful, try this course out next.
Here
you’ll have the chance to build complete web and hybrid mobile
solutions, as well as master front-end web, hybrid mobile app and
server-side development.
Price-point: 7-day free trial, $39 per month thereafter
6 — iOS 9 and Swift 2: From Beginner to Paid Professional on Skillshare
Mark Price’s online course for iOS Swift is everything you need to know about iOS 9 development.
This is another great set of classes for novice iOS coders. Build 15+ apps for iOS 9, learn Swift 2.0 and publish apps to the App Store. Warmups, class projects and exercises will help you keep on top of the workload.
Level: Beginner
Commitment: approximately 37 hours
Price-point: from $15 a month
7 — The iOS Development Course That Gets You Hired on Career Foundry
1-on-1
mentorship from industry experts and real-world projects complement a
set of 6 structured modules. The course covers the very basic principles
of iOS development and takes you right to the point of submitting an
app to the App Store.
Level: Beginner
Commitment: 6 months
Price-point: $4000 (payment plans available)
8 — Get Started With React Native on TutsPlus
Markus Mühlberger’s course for React Native is perfect for anyone who wants to code for multiple mobile platforms.
Learn
how to create and customize UI elements, build user interaction, and
integrate third-party components into apps for both iOS and Android.
Upon completion, you’ll be able to write mobile apps in React Native.
Level: Intermediate
Commitment: 1.2 hours
Price-point: $29 a month
9 — Build a Simple Android App with Java on Treehouse
Ben Deitch’s course will help you build simple mobile apps for Android with Java, without any prior knowledge.
Best-suited
to budding Android developers, this course will explore programming in
Android and some very basic concepts of the Android SDK. By the end of
the course, you’ll have a working knowledge of how a basic app works.
Level: Beginner
Commitment: 1.5 hours
Price-point: from $25 a month
10 — Try iOS on Code School
Gregg Pollack's tutorial series teaches iOS app development from the ground up and requires only basic coding experience.
Write
your first iPhone app code and learn about different UI elements, such
as buttons, labels, tabs and images. Upon completion, you’ll be able to
connect to the internet to fetch data, build out table views and
navigate between different areas of your app.
Level: Beginner
Commitment: 6–8 hours
Price-point: $29 a month
It’s
an exciting time for mobile app developers. And as you can see, there
are plenty of resources out there to help get your career off the
ground. But don’t forget to look at the big picture.
Prototyping is an integral part of the mobile app life cycle. Download Justinmind now and explore a prototyping tool that’s made with the entire product team in mind.
With
the introduction of OLED screens to the iPhone X, more and more people
are requesting night themes in their favourite apps to take advantage of
the true blacks on OLED screens, to save battery, and to make it easier
on the eyes in some cases. But should you add this option to your app?
Don’t confuse choice with convenience.
If
you ask any user if they’d want the option of night mode in your app,
they would say yes. As consumers we think we need more choices. It
sounds very logical. The more choices I have, the more likely I am to
choose something that suits me and makes me happy. But does more choice actually make users happier? In the TED Talk, The Art of Choosing, Sheena Iyengar explains how that might not actually be true.
Just
because users are asking for options, doesn’t mean they’re going to
start using them or that it’s the right choice for them. Depending on
the type of content that you provide to your users, a night mode might
actually hurt their engagement.
You have to ask yourself why you’re thinking about a night mode. If
you’re doing it solely to give your users options, then please, do
yourself and your users a favour and stop. There are many downsides to
having a night mode that you have to consider and be OK with before
adding it to your app.
A night mode creates inconsistency within your app. It's already hard enough to keep your apps consistent with iOS and Android, and, if you have a website, to keep that consistent with everything too. Why would you go out of your way to make it even more difficult for yourself?
A
night mode might reduce your users’ engagement with your app. Your
users are the reason that you have created your app. They have been
using your app and are used to it. If you have good information
architecture and user experience, they might even be using your app from muscle memory. These users are your friends. They have already
memorized your app’s hierarchy and are using affordances and clues in
your app to navigate it fluently. Introducing a dark mode would change
all of that. Now they have to re-learn your app. Even though everything
is in the same place, they have to re-learn the affordances and clues
and repeat the process of getting used to it all over again, and this
risks alienating your users. They might see the dark mode and think
that’s a good choice for them and turn it on, but the next time they
open your app they won’t know how to navigate it and it will feel
strange. Remember when Instagram switched their UI design to the new
flat one with the new logo and everyone was running around setting
things on fire and protesting in the streets? OK, no one protested in the streets, but some users were pissed. Do you want your users to be pissed? Looking back, the re-design of Instagram was a success because it
simplified the interface to make room for new features like stories and
bookmarking photos and such. But a night mode is not a re-design.
Instead of moving your design forward, you would give it a split
personality.
Designing a night mode for an app is no easy task either. You might think that it's just as easy as flipping the background and text colours, but there's actually a lot to consider. If there are photos in your app, are they going to look their best in dark mode? On each given page, is the right content being highlighted when the colours are switched? Does users' attention still flow the same way it did in the regular mode? How does the settings page look? Should the settings page also be switched to dark mode? It would look very weird, wouldn't it? What about all the sub-pages of the settings page? How about the keyboard? Do we change it to the dark keyboard in iOS when in night mode? If you have a black tab bar, should it now suddenly be white? Because if it stays black there would be no contrast, but if you turn it white, there's a big bright object at the bottom pulling attention from the rest of the screen, and that's not really what you want.
What if my users have sensitive eyes and can't handle bright lights? Or find it very hard to read black on white due to dyslexia? Both iOS and Android have very thorough accessibility features that adapt the whole experience for them. Having those settings on an app-by-app basis would be confusing and inconsistent. There are options to reduce the white point, invert colours without inverting photos, apply greyscale, add a tint, and options for different kinds of colour blindness, all built into the system. So accessibility needs aren't an excuse for you to add a night mode to your app.
OK. So there are many reasons why someone shouldn’t add a night mode to their app. But is there a good time to add a night mode? Yes.
It all depends on context: the type of content or service you provide
and the situations in which people use your app. The main complaint
about the lack of a night mode is prolonged reading at night in a dark
environment, mostly in bed or in a car.
If your app is a game, then don’t bother.
If it's a productivity app, it's still a very hard no, as changing the
colour of the tools and the layout in an app that users depend on
heavily might confuse them. Unless you know for a fact that your users
somehow only use your app in bed with the lights off, for their sake do
not add a night mode.
If your app is related to messaging, then it'd be best to optimize for
the Smart Invert feature and let users control the dark appearance from
the accessibility section in Settings if they wish.
If
your app focuses on reading, *cough* Medium *cough*, then it’s a good
idea to provide options for your users to adjust the reading environment
to their comfort. A great example of this is the Reader mode in Safari.
If your app is related to driving, like Google Maps or Podcasts, and
might stay open while the user is behind the wheel, it's a good idea to
add an automatic night mode so that a bright screen doesn't distract
them while driving (can't wait for self-driving cars).
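The "automatic" part can be as simple as keying the theme off the clock. A minimal sketch (my illustration, not from the article; real driving apps use the device's location to compute actual sunrise and sunset, which are hard-coded placeholders here):

```python
# Sketch of an automatic night mode: pick the theme from the time of day
# instead of asking the user to toggle it. The sunrise/sunset defaults
# are placeholder values; a real app would derive them from location.
from datetime import time

def theme_for(now: time, sunrise: time = time(6, 30), sunset: time = time(18, 45)) -> str:
    """Return 'light' between sunrise and sunset, 'dark' otherwise."""
    return "light" if sunrise <= now < sunset else "dark"

print(theme_for(time(14, 0)))   # midday -> 'light'
print(theme_for(time(23, 15)))  # late night -> 'dark'
```

The point is that the user never touches a switch: the app quietly matches the environment, which is exactly the behaviour you want when someone is driving.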
I’ve seen a lot of confusion and frustration from users and designers
around night mode and whether it should be a system-wide feature. I hope
this article made it a bit clearer whether you should or shouldn’t add a
night mode to your app. Happy designing! ❤️
Rumu is a unique game, and of all the games on this list, I think it has
the most distinctive UI. This is most likely because Rumu has pioneered
the ‘Sentient Vacuum Cleaner’ genre, and there’s simply no game similar
enough to pull inspiration from. Because of this, I’ll briefly
summarise the elements I liked most, so you have an idea of what I’m
talking about.
It’s fitting, then, that Rumu’s UI pulls from a number of different
genres and still remains quite unique. Rumu (the titular vacuum cleaner
himself) has a radial menu to manage its quest log and inventory. That’s
about where the traditional UI ends, and you start to see some bespoke
elements.
Tutorial tips for controls appear outside the environments. This is a
nice detail, as it serves not only to communicate the key bind but also
to hint at what you’re supposed to do in any given space.
A
similar method is used for doorways or vent spaces — each is earmarked
with text or iconography to indicate whether the player can pass
through. The difference is actually really important, because it serves
to split how the player treats information throughout the game — if the
information is inside the room, it’s something to be learned. If it
exists outside of the game space, it’s something that little Rumu
already knows.
There’s a ‘Datavision’ function that allows Rumu to see how the various
smart devices and interactable objects connect. It’s a great way to
declutter the environments when the player is being task-oriented, and
it also often reveals hidden easter eggs or gadgets.
One of the smartest UX features of Rumu is how it uses its palette and
art style to generate emotion. A clean, white kitchen feels calm and
simple, while crawling through vents against a sinister dark background
gives the game a sense of urgency and danger.
Rumu is beautiful, functional, unique, and incredibly evocative. Its UX
blends perfectly with the narrative of the game and aids in the
storytelling.
Conclusion: Independent developers are constantly coming up with new,
interesting ways to interact with their games. There are even a few on
this list: Hand of Fate 2 and Tooth and Tail both innovate in
well-trodden genres.
Rumu’s a little different, because the robot vacuum cleaner genre isn’t
quite as mature as, say, first-person shooters. Despite this, the
interactions in Rumu feel natural; the spatial and diegetic elements are
what I’d expect a robo-vacuum to see in the world, and the meta UI tips
help move the player along without breaking the (sometimes literal)
fourth wall.
I look forward to seeing the robot vacuum cleaner genre evolve.
Worst: Stationeers
Picking
this game sparked an internal debate in my mind over having a ‘Worst’
section at all, but in the end I decided it’s always better to get your
feelings out than internalise them.
I really enjoyed Stationeers; I played almost six hours straight in my
first run through. It’s an incredibly complex space station construction
game. Most of its UI is inoffensive: a simple HUD with your vitals and
atmosphere stats, and a slot-based inventory system.
It
all falls apart for me in the item management. Rather than go into
specifics, I’ll give you an example: I need to take the empty battery
out of my welding torch, and replace it with a full one.
I have to press 5 to open my tool belt, use the scroll wheel to
highlight the torch, press F to put it in my hand, press R to open the
torch’s inventory, press E to change hands, and press F to move the
battery into my free hand.
Now I press 2 to open my suit inventory, scroll-wheel to an empty slot,
and press F to place the flat battery in there. Scroll-wheel to the full
battery, press F to place it in my off hand. Press E to change hands.
Press R to open the torch inventory. Press E to change hands. Press F to
place the battery in.
That’s…15 key presses. I can see what they were going for with this system, but there’s got to be a better way.
Virtual Reality
Best: Lone Echo
If UX as a practice is still in its infancy, UX for VR is a
single-celled organism attempting mitosis for the first time. Nobody
really has any idea what’s going to work and what isn’t, and many games
that otherwise execute well have poor UX.
Lone Echo feels like someone looked at what VR will be doing five years
from now and dragged it screaming back into 2017. I don’t think it’s
hyperbole to say that Lone Echo’s UX will help define the future of
virtual and augmented reality interfaces.
There’s no HUD in Lone Echo; instead, your UI is displayed by various
arm-mounted gadgetry. Jack, the player character, has a number of
controls and panels along his suit, each of which the player can
interact with to reveal various interface elements.
This actually annoyed me at first; I wasn’t sure why a robot would need
any sort of interface at all. However, the interactions available are
just so neat and genuinely enjoyable that it becomes a very small
nitpick. You also witness other characters in the game use the same
interface, which gives the game some internal consistency.
Talking to someone, for example, is a matter of simply looking at them
and tapping a button on the controller. This spawns a list of dialogue
options that you select with your finger. It’s a simple thing, but being
able to quickly interact with the object you’re looking at feels great.
Any panels you summon are interactable with your hand. You can scroll
and tap like you would on an iPad. It feels completely natural to work
with, and there were very few times after the opening minutes where I
had trouble with this interaction style.
Similarly,
Jack’s wrist holds a number of functions and features that are
activated using your opposite hand. Slide across your forearm to open
your objectives. Tap the top of your wrist for your scanner, or the side
of your wrist for your welder. The interactions are so second-nature
after having used them a few times that I found myself not even looking
at my hands as I did these simple tasks.
Most of what you see in Lone Echo comes from somewhere. The locomotion,
the dialogues, and the tool interactions are all borrowed from games
that came before it. Lone Echo proves that these interactions are
unequivocally the right way to do things: done well, they become so
immersive and intuitive that the player doesn’t have to remember them;
they just become the way things are done.
Just like the brilliant writing and slick graphics, Lone Echo’s UX is
the reason it’s such a successful game. It keeps the player completely
immersed in everything they’re doing, no matter how complex the task. At
its best, the interactions in Lone Echo are actually fun to use. Menus
that are fun! If that’s not a revolution, I don’t know what is.
Conclusion: The
most immersive experience I’ve ever had in a video game. Lone Echo
bends over backwards to put you in the moment with objects that behave
like the user expects they should, and an environment that is
consistently interactive.
Lone Echo isn’t held back by trying to fit its UI into its narrative; it
builds its entire user experience around the narrative instead. Lone
Echo sets the standard for VR UX to come.
Worst: None
It’s a cop-out, I know. Truth be told, I haven’t played a VR game
released in 2017 that had any truly awful UX. There are plenty of games
that make some missteps, or the occasional obvious error, but that’s
going to happen with a still-growing medium like virtual reality. For
now, VR gets a pass.
If
you got this far, thanks for reading! Hopefully you found something
interesting in my choices. Please feel free to comment with your
opinions, especially if there’s something great that I missed.
Hardik Gandhi holds a Master's in Computer Science and is a blogger, developer, SEO provider, and motivator; he writes Gujarati and programming books and offers career advice and guidance.