Monday, January 22, 2018

Introducing ‘Total, built by hike’ — Android refined to bring the next billion people onto Data


India has a population of 1.3B, with 750M unique mobile users. Approximately 350M are feature phone users and another 400M are smartphone users. With the data growth of the last 18–24 months, the number of data users has grown tremendously to 200M. However, a massive gap still remains. Why aren't all 400M smartphone users using data?
We asked ourselves why the rest of these people aren't in the 200 million bracket. Many challenges still stand in the way of bringing the remaining 1B online.
On top of that, the entire experience of coming online for the first time is a tedious 15–20 step process.
It was clear that to bring the next billion people online we would have to do something radically new. Could we take this 15–20 step funnel and collapse it into a few steps?
We think we’ve done it. Today we’re announcing a brand new product called ‘Total, built by hike’.
‘Total, built by hike’ is a refined version of Android targeted at simplifying the smartphone and internet experience for the next billion people. ‘Total’ lets users access essential services such as Messaging, News, Recharge and more even without an active data connection and paves the way for them to get on Data by providing packs at as low as Re. 1.
It has 4 key elements that, combined, make the end-to-end experience extremely seamless:

1. News, Cricket, Recharge, Wallet & More < 1MB

2. Single Login for All Services

3. Works Without Data

4. Total Data Packs Starting at Re 1. Over 50% Cheaper than Market.

Total Data Pack Pricing vs Market
Over the last 6 months, Hike has worked extremely closely with telecom partners to enable the USSD technology and data upsell on Total. Airtel, Vodafone, Aircel & BSNL are the telecom partners, and together they cover over 42% of the telecom market.

See How Total Works

Chat on Total Without Data
https://www.youtube.com/watch?v=DhWKMRAFyJ0
UPI on Total Without Data
https://www.youtube.com/watch?v=1Jspzf1uaXQ
Get Rail Info on Total Without Data
https://www.youtube.com/watch?v=Zpm0FoIDpGM

Devices — Intex & Karbonn

Intex and Karbonn are the first smartphone partners that will carry ‘Total, built by Hike’. Total will be available on 4 models across Intex & Karbonn. The devices with Total are:
  • Intex Aqua Lions N1
  • Intex Aqua Lions T1
  • Intex Aqua Lions T1-Lite
  • Karbonn A40 Indian
The devices are scheduled to hit the shelves starting 1st March 2018 starting at a price point of Rs 3,000.

Closing Thoughts

We’re excited by the potential of Total, built by hike to impact millions of people in India as well as other developing nations. ‘Total, built by hike’ propels digital inclusion and furthers the national agenda of financial inclusion as well as socio-economic progress. It works well for service providers as it gives them access to millions of people who can use their services offline.
To be one of the first to experience Total, built by Hike,

A Eulogy for the Headphone Jack


Sometime in the mid-2000s, I was a freelance web developer in Philadelphia with some pretty crappy health insurance. I started having occasional heart palpitations, like skipped heart beats. My doctor said it was probably not serious, but she could do tests to rule out very unlikely potential complications for about $1,000. That seemed pretty expensive to rent a portable EKG for a single day, so I googled around for some schematics. Turned out you could build a basic three-lead EKG with about $5 worth of Radio Shack parts (I no longer have the exact schematic, but something like this). I didn’t really understand what the circuit did, but I followed the directions and soldered it together on some protoboard, connected a 9V battery, and used three pennies as electrodes that I taped to my chest. I hooked the output of the device to my laptop’s line in and pressed ‘record’.
screenshot of my heartbeat in Audacity
Audacity displayed the heartbeat signal live as it recorded. Sure enough, I was having pretty common/harmless Premature Ventricular Contractions. There’s one on the right side of the screenshot above.
Calling the 1/8th inch connectors you’d find on pretty much every piece of consumer electronics until recently “audio jacks” does them a disservice. It’s like calling your car a “grocery machine”. Headphone and microphone ports are, at their most basic, tools for reading and producing voltages precisely and rapidly over time.
My homemade EKG is a voltage converter. Electrodes attached to points around my heart measure tiny differences in voltage produced by signals that keep it beating. Those measured signals are amplified to about plus or minus 2 volts. That new voltage travels through an audio cable to the “Line In” on my sound card.
Sound cards happen to carry sound most of the time, but they are perfectly happy measuring any AC voltage from -2 to +2 volts, 48,000 times per second, with 16 bits of accuracy. Put another way, your microphone jack measures the voltage on a wire (two wires for stereo) roughly every 0.02 milliseconds and records it as a value between 0 and 65,535. Your headphone jack does the opposite: by applying a voltage between -2 and +2 volts to a wire every 0.02 milliseconds, it creates a sound.
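To make those numbers concrete, here is a minimal sketch (mine, not from the original article) that samples the default input device and converts the 16-bit samples back into approximate voltages. It assumes the third-party sounddevice and numpy Python packages, and it uses the ±2 V full-scale figure from the text, which is only an approximation for real hardware.

```python
# Hedged sketch: read the line-in/microphone jack as raw voltage samples.
import numpy as np
import sounddevice as sd  # third-party package, assumed installed

FS = 48_000               # samples per second -> one sample every ~0.02 ms
FULL_SCALE_VOLTS = 2.0    # assumption: +/-2 V maps to the full 16-bit range
DURATION_S = 5

# Record 5 seconds of 16-bit mono audio from the default input device.
samples = sd.rec(int(DURATION_S * FS), samplerate=FS, channels=1, dtype='int16')
sd.wait()  # block until the recording is finished

# Each sample is an integer in [-32768, 32767]; map it back to an approximate voltage.
volts = samples.astype(np.float32) / 32768.0 * FULL_SCALE_VOLTS
print(f"captured {volts.size} samples, min {volts.min():+.3f} V, max {volts.max():+.3f} V")
```

Recording a signal like the homemade EKG is just this loop, plus whatever analysis you want to run on the resulting voltage array.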

To any headphone jack, all audio is raw in the sense that it exists as a series of voltages that ultimately began as measurements by some tool, like a microphone or an electric guitar pickup or an EKG. There is no encryption or rights management, no special encoding or secret keys. It's just data in the shape of the sound itself, a record of voltages over time. When you play back a sound file, you feed that record of voltages to your headphone jack. It applies those voltages to, say, the coil in your speaker, which then pushes or pulls against a permanent magnet to move the air in the same way it originally moved the microphone when the sound was recorded.
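The reverse direction is just as simple. As a rough sketch (again, not the author's code), this uses only the Python standard library to write one second of a 440 Hz sine wave as 16-bit samples into a WAV file; playing that file back feeds exactly those values, one every 1/48,000th of a second, to the headphone jack.

```python
# Hedged sketch: a sound file is just a recorded series of voltage values.
import math
import struct
import wave

FS = 48_000        # samples per second
FREQ_HZ = 440.0    # concert A
AMPLITUDE = 0.5    # fraction of full scale, to leave some headroom

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)   # mono
    wav.setsampwidth(2)   # 16-bit samples
    wav.setframerate(FS)
    frames = bytearray()
    for n in range(FS):   # one second of audio
        value = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * n / FS)
        frames += struct.pack("<h", int(value * 32767))  # little-endian signed 16-bit
    wav.writeframes(bytes(frames))
```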
Smartphone manufacturers are broadly eliminating headphone jacks, replacing them with Bluetooth and other wireless headphones. We're all going to lose touch with something, and to me it feels like something important.
The series of voltages a headphone jack creates is immediately understandable and usable with the most basic tools. If you coil up some copper, and put a magnet in the middle, and then hook each side of the coil up to your phone’s headphone jack, it would make sounds. They would not be pleasant or loud, but they would be tangible and human-scale and understandable. It’s a part of your phone that can read and produce electrical vibrations.
Without that port, we will forever be beholden to device drivers between our sounds and our speakers. We’ll lose reliable access to an analog voltage we could use to drive any magnetic coil on earth, any pair of headphones. Instead, we’ll have to pay a toll, either through dongles or wireless headphones. It will be the end of a common interface for sound transfer that survived more or less unchanged for a century, the end of plugging your iPod into any stereo bought since WWII.
Entrepreneurs and engineers will lose access to a nearly universal, license-free I/O port. Independent headphone manufacturers will be forced into a dongle-bound second-class citizenry. Companies like Square — which made brilliant use of the headphone/microphone jack to produce credit card readers that are cheap enough to just give away for free — will be hit with extra licensing fees.
Because a voltage is just a voltage. As long as it stays within the input range, nobody can dictate what you do with it. In the case of the Square magstripe reader, the device is powered by the energy normally used to drive speakers (it harvests the energy of a sine wave played over the headphone output), and it transmits data to the microphone input.
There’s also the HiJack project, which makes this whole repurposing process open source and general purpose. They provide circuits that cost less than $3 to build that can harvest 7mW of power from a sound playing out of an iPhone’s headphone jack. Because you have raw access to some hardware that reads and writes voltages, you can layer an API on top of it to do anything you want, and it’s not licensable or limited by outside interests, just some reasonably basic analog electronics.
I don't know exactly how losing direct access to our signals will harm us, but doesn't it feel like it's going to, somehow? Like we may get so far removed from how our devices work, by licenses and DRM, dongles and adapters, that we no longer even want to understand them? There's beauty in the transformation of sound waves to electricity through a microphone, and then from electricity back to sound again through a speaker coil. It is pleasant to understand. Compare that to understanding, say, the latest Bluetooth API. One's an arbitrary and fleeting manmade abstraction; the other is a mysterious and dazzlingly convenient property of the natural world.

So, if you’re like me and you like headphone jacks, what can you do? Well, you could only buy phones that have them, which I think you’ll be able to do for a couple years. Vote with your dollar!
You can also tell companies that are getting rid of headphone jacks that you don’t like it. That your mother did not raise a fool. That aside from maybe water-resistance, there’s not a single good reason you can think of to give up your headphone jack. Tell them you see what they’re up to, and you don’t like it. You can say this part slightly deeper, through gritted teeth, if you get to say it aloud. Or, just italicize it so they know you are serious.

Sunday, January 21, 2018

Ten Thousand Followers


The amazing story of all you awesome people

There aren’t any good stock photos of “ten thousand”, so this piece will just have lots of kittens
I started writing a blog in May 2016, partly because I kept writing rants on Facebook that apparently were “too good not to be online somewhere”, and partly because I was bored after my Master’s degree and wanted something to do with my Sunday mornings.
Sleeping in, of course, was never an option.
This is Luna. Luna is my 6am alarm clock. Every. Single. Day
18 months later, and I’ve written about 100,000 words, been published in all sorts of places, and am now getting regular offers to pitch to major publications — more on this in the coming months.
And most importantly of all, I got to 10,000 followers. This time last year, it was 100 and about half of them were related to me.
All in all, it’s been a good year.
Pictured: Getting what you always wanted
So what’s in store for the Health Nerd? You’ll be happy to know that this year I’ve applied for a PhD with the University of Wollongong, which is actually super exciting and not scary like it feels to me sometimes. I’m also going to be — hopefully — releasing some episodes of a podcast that I’ve started with a brilliant co-host. The topic will be science in the media and I’m really excited to introduce all of you to my dulcet tones over the airwaves.
I’m so much less awkward than I am in text.
What does all of this activity mean for the blog? Nothing! I'll still be aiming for my regular one health story a week on Medium, as well as an extra members-only article a month for all you subscribers who love that extra content.
Pictured: “Extra content”
To sum up, I’d just like to say thank you to you all. I’d never have made it here without all you brilliant people following me and making this all worthwhile. It was a fantastic 2017, and 2018 shows every sign of being brilliant as well.
I can’t wait to see what’s in store.

How to Access and Manage Your Voice Command Data


The past few years have been huge for voice-activated command services. It seems like every major tech manufacturer has one on the market.
Whether you’re using Amazon Alexa, Google Assistant, Apple’s Siri or Microsoft Cortana, it’s hard to ignore the hands-free revolution.
But how secure are your communications? Are your commands really kept between you and the device? Where is all of this information stored?
Answering these questions is critical when determining whether or not you want to use these personal assistants on a daily basis.
They’re also important when it comes time to access, manage or delete your command history.

Always On?

Contrary to popular belief, most of these devices aren’t “always on.” Instead, they only begin recording after the activation command, or “wake word,” has been issued.
In the case of Alexa and the Echo device, the command is “Alexa.” For Google Assistant, the phrase is “OK Google.” Some devices let you choose from several different wake words. Alexa users, for example, can choose between the default word of “Alexa” and one of two other options: “Amazon” or “Echo.”
So recording doesn’t begin until you’ve spoken the magic phrase. That might even be enough to put your mind at ease. But it’s important to note that these devices also store your voice commands in the cloud. As such, they’re prime targets for hackers.

Turning the Personal Assistant Into a Personal Wiretap

Like most new technology, these voice-activated command services are prone to cyber attacks.
The Amazon Echo device has already been hacked. Mark Barnes, a British security expert, recently demonstrated how malware can turn the consumer product into a live audio surveillance stream. You can find his research on his official blog.
Mark’s hack has a significant flaw — it requires access to the device and involves physical modification of the targeted hardware. Nonetheless, his proof-of-concept is enough to worry many consumers around the globe.

The Dawn of BlueBorne

Another hack — one which uses a Bluetooth exploit — was identified in late 2017.
Known as BlueBorne, this attack doesn’t involve physical access to any device. It doesn’t even require the end-user to click on any links or open any files.
As such, BlueBorne has the potential to cause widespread havoc among current users of devices like the Amazon Echo and Google Home.
BlueBorne only gets worse from there. The exploit also has the potential to take control of a remote device and infect every other device on the same network.
As if that wasn't enough, most modern anti-malware and antivirus programs wouldn't even detect the attack.
Thankfully, the majority of these devices have already received firmware updates to patch the hole. According to recent estimates, there are still approximately 20 million devices — primarily Amazon Echo and Google Home products — that are susceptible.

Controlling and Managing Your Files

Most devices let you easily manage your files. Amazon Echo allows you to delete individual recordings by navigating into your device settings and your history folder.
From there, just tap on a single item and hit “Delete voice recordings” to finalize the action. To delete everything, sign into your Amazon Echo account at Amazon’s official website and navigate to “Manage Voice Recordings.”
Although Apple recently disclosed that Siri stores data for up to 18 months, you can turn off voice dictation and Siri’s assistance to prevent voice recording and archival.
Google Home users can delete past recordings by navigating to the “My Activity” section of their Google account. Just like Amazon Echo, Google Home lets you remove individual or entire groups of files.

Using Voice Command Services Safely and Securely

While devices like Amazon Echo and Google Home represent huge leaps forward in smart home technology, they’re not without their faults.
To use these products safely and securely, make sure you update them with the latest patches and enhancements.
Deleting your past messages is a good practice to minimize the damage if a hack does occur, but this can also hamper the personalization of your device.
Ultimately, it comes down to balancing your security and privacy with the amount of functionality you need.

How you can build your own VR headset for $100


My name is Maxime Coutté. I’m 16 and I built my own VR headset with my best friends, Jonas Ceccon and Gabriel Combe. And it ended up costing us $100.
I started programming when I was 13, thanks to my math teacher. Every Monday and Tuesday, my friends and I used to go to his classroom to learn and practice instead of having a meal at the cafeteria.
I spent one year building a very basic 8-bit OS from scratch and competing in robotics contests with my friends.
I then got interested in VR and with my friends we agreed that it would be really cool to create our own world in VR where we could spend time after school. But facing the fact that an Oculus was $700 at the time, we decided to build our own headset.
3D printed parts of the headset

Making VR accessible to everyone?

It was because of an anime called Sword Art Online, in which the main character is in a virtual reality RPG, that I fell in love with VR. I wanted to understand every aspect of it.
I bought the cheapest components I could and we started by learning the very basics of the physics and math behind VR (proper acceleration, antiderivatives, quaternions…). And then we re-invented VR. I wrote WRMHL, and then FastVR with Gabriel. Putting all of this together, we ended up with a $100 VR headset.
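To give a flavor of the math involved (this is an illustrative sketch, not FastVR's actual code), orientation tracking on a headset typically means integrating gyroscope readings into a unit quaternion, sample by sample:

```python
# Hedged sketch: integrate gyroscope angular velocity into an orientation quaternion.
import math

def quat_multiply(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def integrate_gyro(q, gyro_rad_s, dt):
    """Advance orientation quaternion q by one gyroscope sample.
    gyro_rad_s: angular velocity (x, y, z) in rad/s; dt: sample period in seconds."""
    wx, wy, wz = gyro_rad_s
    dq = quat_multiply(q, (0.0, wx, wy, wz))   # dq/dt = 0.5 * q * (0, w)
    q = tuple(qi + 0.5 * dqi * dt for qi, dqi in zip(q, dq))
    norm = math.sqrt(sum(c * c for c in q))    # renormalize to fight numerical drift
    return tuple(c / norm for c in q)
```

Real headsets also fuse accelerometer (and sometimes magnetometer) data to correct the drift that this simple integration accumulates.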

A fully hackable VR headset and development kit

To speed up VR development time, we built FastVR, an open-source SDK for developers that is easy to understand and customize. It works like this:
  • The core headset computes the position of the headset in space;
  • The position is sent from the headset to WRMHL, and part of the CPU’s power is dedicated to reading those messages;
  • Then FastVR retrieves the data and uses them to render the VR game.
Everything you need to build the headset has been open-sourced and can be hacked.
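As a rough illustration of that pipeline (the message format here is hypothetical, and this is not WRMHL's actual protocol), the host side boils down to reading orientation messages streamed by the headset's microcontroller over a serial link, for example with the pyserial package:

```python
# Hedged sketch: read orientation messages streamed by the headset over USB serial.
import serial  # third-party pyserial package, assumed installed

PORT = "/dev/ttyUSB0"   # assumption: whatever port the headset shows up on
BAUD = 115_200

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue  # read timed out, no message this cycle
        try:
            # Hypothetical "w,x,y,z" line format for the orientation quaternion.
            w, x, y, z = (float(v) for v in line.split(","))
        except ValueError:
            continue  # skip malformed or partial messages
        # A real pipeline would hand this quaternion to the renderer (FastVR's job).
        print(f"orientation: w={w:+.3f} x={x:+.3f} y={y:+.3f} z={z:+.3f}")
```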

Why open source?

I want to make VR mainstream. So I reached out to Oussama Ammar, one of the co-founders at The Family. I talked to him about setting up a company and launching a Kickstarter.
But he convinced me that for now, it’s better to wait on starting a business, to keep meeting others who have the same goals, and to keep learning.
We took a trip to Silicon Valley and Oussama introduced me to the chief architect at Oculus, Atman Binstock. And they gave me some precious advice: make all of this open source.

The Next Step?

There are still a lot of technical points that we want to improve.
Our big focus right now is on a standalone VR headset, which we already have as a simple version, and cheaper 3D tracking.
All of this will be released soon.

How do I get started?

If you want to learn more about the technical side and build your headset, just follow the guide by clicking here. Star the repo if you liked it ⭐️

Saturday, January 20, 2018

How to Design Social Systems (Without Causing Depression and War)





Here I’ll present a way to think about social systems, meaningful interactions, and human values that brings these often-hazy concepts into focus. It’s also, in a sense, an essay on human nature. It’s organized in three sections:
  • Reflection and Experimentation. How do people decide which values to bring to a situation?
  • Practice Spaces. Can we look at social systems and see which values they support and which they undermine?
  • Sharing Wisdom. What are the meaningful conversations that we, as a culture, are starved for?
I’ll introduce these concepts and their implications for design. I will show how, applied to social media, they address issues like election manipulation, fake news, internet addiction, teen depression & suicide, and various threats to children. At the end of the post, I’ll discuss the challenges of doing this type of design at Facebook and in other technology teams.

Reflection and Experimentation

As I tried to make clear in my letter, meaningful interactions and time well spent are a matter of values. For each person, certain kinds of acts are meaningful, and certain ways of relating. Unless the software supports those acts and ways of relating, there will be a loss of meaning.
In the section below about practice spaces, I’ll cover how to design software that’s supportive in this way. But first, let’s talk about how people pick their values in the first place.
We often don’t know how we want to act, or relate, in a particular situation. Not immediately, at least.
When we approach an event (a conversation, a meeting, a morning, a task), there’s a process — mostly unconscious — by which we decide how we want to be.
Interrupting this can lead to doing things we regret. As we’ll see, it can lead to internet addiction, to bullying and trolling, and to the problems teens are having online.
So, we need to sort out the values with which we want to approach a situation. This is a process. I believe it’s the same process, whether you’re deciding something small — like how openly you will approach a particular conversation — or something big.
Let’s start with something big: many teenagers are engaged in sorting out their identities: they take ideas about how they ought to act (manly, feminine, polite, etc) and make up their own minds about whether to approach situations with these values in mind.
Worksheets from “On My Own Terms”. Join our community to play these games!
For these teens, settling on the right values takes a mix of experimentation and reflection. They need to try out different ways of being manly, feminine, intelligent, or kind in different situations and see how they work. They also need to reflect on who they want to be and how they want to live.
These two ingredients — experimentation and reflection — are required to sort out our values. Even the small decisions (for example, deciding how to balance honesty and tact in a conversation) require experimenting in real situations, and reflecting on what matters most.
This process can be intuitive, nonverbal, and unconscious, but it is vital.¹ If we don’t find the right values, it’s hard to feel good about what we do. The following circumstances interfere with experimentation and reflection:
  • High stakes. When deviation from norms becomes disastrous in some way — for instance, with very high reputational stakes — people are afraid to experiment. People need space to make mistakes and systems and social scenes with high consequences interfere with this.
  • Low agency. To put values to the test, a person needs discretion over the manner of their work: they need to experiment with moral values, aesthetic values, and other guiding ideas. Some environments — many of them corporate — make no room for being guided by one’s own moral or aesthetic ideas.
  • Disconnection. One way we judge the values we’re experimenting with is via exposure to their consequences. We all need to know how others feel when we treat them one way or another, to help us decide how we want to treat them. Similarly, an architect needs to know what it’s like to live in the buildings she designs. When the consequences of our actions are hidden, we can’t sort out what’s important.²
  • Distraction and overwork. We also lose the capacity to sort out our values when reflection becomes impossible. This is the major cost of noisy environments, infinite entertainment, push notifications, and some types of poverty.
  • Lack of faith in reflection. Finally, people can come to consider reflection useless, or something to be avoided, even though it is so natural. The emotions which trigger reflection, including doubt and confusion, can be brushed away as distractions. One way this happens is when people view their choices through a behaviorist lens: as determined by habits, reinforcement learning, or permanent drives.³ This makes it seem like people don't have values at all, only habits, tastes, and goals. Experimentation and reflection seem useless.
Software-based social spaces can be disastrous for experimentation and reflection.
One reason that private group messaging (like WhatsApp and Messenger) is replacing virality-based forums (like Twitter, News Feed, and increasingly, Stories) is that the latter are horrible for experimenting with who we are. The stakes are too high. They seem especially bad for women, for teens, and for celebrities—which may partly explain the rise in teen suicide—but they're bad for all of us.
A related problem is online bullying, trolling, and political outrage. Many bullies and trolls would embrace other values if they had a chance to reflect and were better exposed to consequences. In-person spaces are much better for this.
Reflection can be encouraged or discouraged by design — this much is clear from the variety of internet-use helpers, like Moment and Intent. All of us (not just bullies and trolls) would use the Internet differently if we had more room for reflection.
Two lockscreens: one design encourages reflection, and one doesn’t. [from “Empowering Design”]

Exercise: On My Own Terms

In order to learn to support users in experimentation and reflection, designers must experiment and reflect on their own values. On My Own Terms is an exercise for this. Players fill out a worksheet, then socialize in an experimental way.
“On My Own Terms”. Join our community to play these games!
In the experimentation part, players defy norms they’ve previously obeyed, and see how it works out. Often they find that people like them better when they are less conventional — even when they are rude!

Here’s one thing this game makes clear: we discover what’s important to us in the context of real choices and their consequences. People often think they have certain values (eating kale, recycling, supporting the troops) but when they experiment and reflect on real choices, these values are discarded. They thought they believed in them, but only out of context.
This is how it was for me with consistency, rationality, masculinity, and being understated. When I played On My Own Terms, I decided to value these less. My true values are only clear through experimentation and reflection.
For users to have meaningful interactions and feel their time was well spent, they need to approach situations in a way they believe in. They need space to experiment and reflect.
But this is not enough.

Practice Spaces

Every social system makes some values easier to practice, and other values harder. Even with our values in order, a social environment can undermine our plans.
Most social platforms are designed in a way that encourages us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for a fight, we’ll likely regret our choices.
There’s a way to address this, but it requires a radical change in how we design: we must reimagine social systems as practice spaces for the users’ values — as virtual places custom built to make it easier for the user to relate and to act in accord with their values.
Designers must get curious about two things:
  1. When users want to relate according to a particular value, what is hard about doing that?
  2. What is it about some social spaces that can make relating in this way easier?
For example, if an Instagram user valued being creative, being honest, or connecting adventurously, then designers would need to ask: what kinds of social environments make it easier to be creative, to be honest, or to connect adventurously? They could make a list of places where people find these things easier: camping trips, open-mics, writing groups, and so on.
Next, the designers would ask: which features of these environments make them good at this? For instance, when someone is trying to be creative, do mechanisms for showing relative status (like follower counts) help or hurt? How about when someone wants to connect adventurously? Or, with being creative, is this easier in a small group of close connections, or a large group of distant ones? And so on.
To take another example, if a News Feed user believes in being open-minded, designers would ask which social environments make this easier. Having made such a list, they would look for common features. Perhaps it’s easier to be open-minded when you remember something you respect about a person’s previous views. Or, perhaps it’s easier when you can tell if the person is in a thoughtful mood by reading their body language. Is open-mindedness more natural when those speaking have to explicitly yield time for others to respond? Designers would have to find out.

Exercise: Space Jam

To start thinking this way, it’s best if designers focus first on values which they themselves have trouble practicing. In this game, Space Jam, each player shares something they’d like to practice, some way of interacting. Then everyone brainstorms, imagining practice spaces (both online and offline) which could make this easier.
“Space Jam”. Join our community to play these games!
Here’s an example of the game, played over Skype with three designers from Facebook:
Eva says she wants to practice “changing the subject when a conversation seems like a dead end.”
Someone comments that Facebook threads are especially bad at this. We set a timer for three minutes and brainstorm on our own. Then everyone presents one real-world way to practice, and one mediated way.
George’s idea involves a timer. When it rings, everyone says “this conversation doesn’t meet my need for ____”. Jennifer suggests something else: putting a bowl in the middle of a conversation. Players can write out alternate topics and put them in the bowl in a conspicuous but non-interrupting way. (Jennifer also applies this idea to Facebook comments, where the bowl is replaced by a sidebar.)
We all wonder together: could it ever be “okay” for people to say things like “this conversation doesn’t meet my need for ____”? Under what circumstances is this safe to say?
This leads to new ideas.

In the story above, Eva is an honest person. But that doesn’t mean it’s always easy to be honest. She struggles to be honest when she wants to change the conversation. By changing the social rules, we can make it easier for her to live according to her values.
Games like Space Jam show how much influence the rules of social spaces have over us, and how easy it is for thoughtful design to change those rules. Designers become more aware of the values around them and why they can be difficult to practice. They feel more responsible for the spaces they are creating. (Not just the spaces they make for users, but also in daily interactions with their colleagues). This gives them a fresh approach to design.
If designers learn this skill, they can support the broad diversity of users’ values. Then users will no longer have to fight the software to practice their values.

Sharing Wisdom

I hope the previous ideas—reflection, experimentation, and practice spaces—have given a sense for how to support meaningful actions. Let’s turn to the question of meaningful information and meaningful conversation.
We are having a problem in this area, too.
Amidst nonstop communication — a torrent of articles, videos, and posts — there is still a kind of conversation that people are starved for, because our platforms aren’t built for it.
When this type of conversation — which I’ll call sharing wisdom — is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded, unrecognized, and impossible to rally around.
As we’ll see, this situation is easy to exploit, and the media and fake news ecosystems have taken advantage. By looking at how this exploitation works, we can see how conversations become ideological and polarized, and how elections are manipulated.
But first, what do I mean by sharing wisdom?
Social conversation is often understood as telling stories, sharing feelings, or getting advice. But each of these can be seen as a way to discover values.
When we ask our friends for advice — if you look carefully — we aren’t often asking about what we should do. Instead, we’re asking them about what’s important in our situation. We’re asking for values which might be new to us. Humans constantly ask each other “what’s important?” — in a spouse, in a wine, in a programming language.
I’ll call this kind of conversation (both the questions and the answers) wisdom.
Wisdom, n. Information about another person’s hard-earned, personal values — what, through experimentation and reflection, they’ve come to believe is important for living.
Wisdom is what’s exchanged when best friends discuss their relationships or jobs, when we listen to stories told by grandmothers, church pastors, startup advisors, and so on.
It comes in many forms: mentorship, texts, rituals, games. We seek it naturally, and in normal conditions it is abundant.
For various reasons, the platforms are better for sharing other things (links, recommendations, family news) than for asking each other what’s important. So, on internet platforms, wisdom gets drowned out by other forms of discourse:
  • By ideology. Our personal values are easily eclipsed by ideological values (for instance, by values designed to promote business, a certain elite, or one side in a political fight). This is happening when posts about partisan politics make us lose track of our shared (or sharable) concerns, or when articles about productivity outpace our deeper life questions.
  • By scientism. Sometimes “hard data” or pseudo-scientific “models” are used to justify things that would be more appropriately understood as values. For instance, when neuroscience research is used to justify a style of leadership, our discourse about values suffers.
  • By bullshit. Many other kinds of social information can drown out wisdom. This includes various kinds of self-promotion; it includes celebrities giving advice for which they have no special experience; it includes news. Information that looks like wisdom can make it harder to locate actual, hard-earned wisdom.
For all these reasons, talk about personal values tends to evaporate from the social platforms, which is why people feel isolated. They don’t sense that their personal values are being understood.
In this state, it’s easy for sites like Breitbart, Huffington Post, Buzzfeed, or even Russia Today to capitalize on our feeling of disconnection. These networks leverage the difficulty of sharing wisdom, and the ease of sharing links. They make a person feel like they are sharing a personal value (like living in a rural town or supporting women), when actually they are sharing headlines that twist that value into a political and ideological tool.

Exercise: Value Sharing Circle

For designers to get clear about what wisdom sounds like, it can be helpful to have a value sharing circle. Each person shares one value which they have lived up to on the day they are playing, and one which they haven’t. Here’s a transcript from one of these circles:
There are twelve of us, seated for dinner. We eat in silence for what feels like a long time. Then, someone begins to speak. It’s Otto. He says he works at a cemetery. At 6am this morning, they called him. They needed him to carry a coffin during a funeral service. No one else could do it. So, he went. Otto says he lived up to his values of showing up and being reliable. But — he says — he was distracted during the service. He’s not sure he did a good job. He worries about the people who were mourning, whether they noticed his missteps, whether his lack of presence made the ritual less perfect for them. So, he didn’t live up to his values of supporting the sense of ritual and honoring the dead.
In the course of such an evening, participants are exposed to values they’ve never thought about. That night, other people spoke of their attempts to be ready for adventure, be a vulnerable leader, and make parenthood an adventure.

Playing this makes the difference between true personal values and ideologies very clear. Notice how different these values are from the values of business. No one in the circle was particularly concerned with productivity, efficiency, or socio-economic status. No one was even concerned with happiness!
Social platforms could make it much easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, without turning them into ideologies or articles.
This would do more to heal politics and media than any “fake news” initiative. To do it, designers will need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

The Hardest Challenge

I’ve pointed out many challenges, but left out the big one. 😕
Only people with a particular mindset can do this type of design. It takes a new kind of empathy.
Empathy can mean understanding someone’s goals, or understanding someone’s feelings. And these are important.
But to build on these concepts — experimentation, reflection, wisdom, and practice spaces — a designer needs to see the experimental part of a person, the reflective part, the person’s desire for (and capacity for) wisdom, and what the person is practicing.
As with other types of empathy, learning this means growing as a person.
Why? Well, just as it’s hard to see others’ feelings when we repress our own, or hard to listen to another person’s grand ambitions unless we are comfortable with ours... it’s hard to get familiar with another person’s values unless we are first cozy with our own, and with all the conflicts we have about them.
This is why the exercises I’ve listed (and others, which I didn’t have space to include) are so important. Spreading this new kind of empathy is a huge cultural challenge.
But it’s the only way forward for tech.

Thanks for reading. (Here are the credits and footnotes.)
Please clap for this and the previous post!
And…
