Saturday, January 20, 2018

The Lies Facebook Tells Itself (and Us)


Mark Zuckerberg on a tractor in Blanchardville, Wis., in April
Mark Zuckerberg informed us a few days ago that he would be rewiring our information landscape. Posts from friends and family move up in the rankings; news and media fall off. He made a few oblique references as to why and assured us in an insipid 533-word blog post that the changes would mean that the 50 minutes users spend on the platform each day would be “time well spent.”
Anyone who has been even partially sentient over the past few years has noticed how we have become shrouded in our filter bubbles, secure like never before in the complacency of our convictions. This certainty in the righteousness of our own point of view makes us regard a neighbor with a yard sign the way a Capulet regards a Montague. It seems to me that we suddenly hate each other a whole lot more than we ever did before.
So it should come as no surprise that the place where filter bubbles are the thickest, where the self-satisfied certitude that comes from unchecked power is iron-clad, is at the headquarters of Facebook itself. This was brought home to me when I read an interview with the head of Facebook’s News Feed product, Adam Mosseri, by the savvy tech blogger Ben Thompson.
Mosseri, who has been at Facebook for nearly a decade (eons in Facebook chronos), was eager to explain to an interviewer why this change was rational, normal, good for humanity (the company counts one quarter of humanity as monthly active users). The interview was quite a get for Thompson, and he published it in near-verbatim format. In so doing, he laid bare just how removed from the rest of humanity Facebook management is, and how blissfully ignorant they are about the consequences of their actions.
I refined my outrage into five points Mosseri makes (down from 15 initially) that illustrate the degree to which Facebook executives live in a world of their own making where the rest of us are expected to comply.

#1 The changes are for our collective “well-being”

The most glaring assumption that jumps out of this interview (as well as official Facebook communiques) is that we are all asked to swallow Facebook’s incredibly vague gauge of “well-being,” or “meaningful social interaction.” In fact, these terms are sometimes tossed about interchangeably. (Zuckerberg uses “well-being” three times in his post.)
Excerpt from interview on Stratechery.com.
In the excerpt above, Mosseri implies that Facebook is doing this for our own mental health, and that it’s based on extensive research. Interactions = good. Passively consuming content = bad.
Aside from the disturbingly paternalistic assumptions therein, can I ask how Facebook defines well-being? And, since they have done such extensive research, can they share it with the public transparently? Mosseri’s answer: “We’ll certainly consider it…” (Facebook has a blog post that discusses a few of its conclusions here.)
To me, this strikes at the heart of the peril posed by Facebook: The platform probably wields more power over information (and perhaps even our well-being) than any company ever has. And yet it engages in zero public debate about the changes it makes. It simply rolls them out. We are asked to buy Facebook’s version of meaningful, as in this Mosseri statement: “So if you and I had a back and forth conversation on a post from a Page, that would actually count as a meaningful social interaction.” Hence, it would get a higher rank in the algorithm, etc.
Is an exchange “meaningful”? I can think of plenty of Facebook exchanges that merely raised my blood pressure. These are sweeping categories. Facebook has placed itself as the imperious custodian of our well-being, but tells us nothing about how it cares for us. And do they care if it has side effects? Just ask independent journalists in Bolivia what happens when Facebook starts using them as guinea pigs in an experiment about their well-being: Their audience drops, the government’s ability to control public opinion increases. And when they complain to Facebook, they get an automated reply email.

#2 “This change actually has very little to do with false news…”

Mosseri actually said that. But that’s not as stunning as what came next: “I will say that the amount of attention on false news specifically and a number of other integrity issues, certainly caught us off guard in a number of ways and it’s certainly been something we’ve tried to respond responsibly [to].”
Let’s unpack this. For more than a year, Facebook has been under scrutiny because there has been a flood of outright fake and misleading “news” coursing through its pipes. As studies have shown, people share fake news on Facebook, often more than the real stuff. The Pope endorsed Donald Trump? That spreads on Facebook. People get pissed. When the senior leadership at Facebook says this caught them “off guard,” I have to pick my jaw up off the floor. Inside the Facebook HQ, the filter bubble is thicker than a security blanket. They really believe that all they are doing is connecting people and fostering “meaningful interactions.” They are not playing Russian roulette with our democratic institutions or selling ads to people who want to burn Jews.
And this filter bubble is so impenetrable that they believe one minute that they have the power to manipulate our mood (they do) and are shocked the next when they get blowback for allowing people to manipulate our politics.
Then the last part: it’s “something we’ve tried to respond responsibly [to].” No, Facebook, you have not. The only responsible response after these revelations would be a massive overhaul of your system and a transparent conversation with the public and Congress about how your algorithm works. You have produced the information equivalent of a massive E. coli contamination. Instead, your response has been an underfunded effort to infuse fact-checking into the News Feed, and a 41% uptick in what you pay your lobbyists.

#3 “Does the scrutiny accelerate the process? It’s really hard to say.”

Yes, it does, and no, it’s not. This statement is in response to Thompson’s question about the criticism Facebook has received in the past year over its distribution of fake and misleading news and whether that has prompted the company to assume greater responsibility for what its users see. Mosseri’s full response is here:
Excerpt from interview on Stratechery.com
Here’s another counterfactual: Do you think the revelations about years of sexual abuse, assault and downright rape in the workplace by powerful men (Harvey Weinstein, Matt Lauer, Charlie Rose, etc., etc.) have accelerated the conversation about women’s rights and equity in the workplace? I mean, it’s possible.
So let’s assume that Facebook continues to post $4.7 billion in net income each quarter and its stock rises another 40 percent over the next 12 months (market cap at this writing is $517 billion), and there is no public criticism about fake news, targeting voters, and so forth. Absent any external pressure, do you think that Zuckerberg and the rest of the boys in senior management (and Sheryl Sandberg) take it upon themselves to head to a sweat lodge to probe their souls about whether the way they are redrawing the map of our information economy is good for humanity? Sure, that’s likely.

#4 Does Facebook have any responsibility toward media companies?

It’s a great question posed by Thompson. And the answer confirms my worst fears.
Mosseri’s initial response is anodyne enough: “I think we have a number of responsibilities.” News stories are important to people, he says. But then, just as quickly, he contorts himself into a pretzel to explain why it’s also not the case: “…news is a minority of the media content on Facebook, and media is a minority of the overall content in the News Feed.” Ergo, it’s not that big of a responsibility.
Two major fallacies here. The first: If there is less quantity, then there is less importance. My five-year-old niece’s recent birthday was a big hit on Facebook, as I imagine many other birthdays were that day. So, that’s more important to the Facebook community (read: humanity) than the SNAFU alert sent to all the residents of Hawaii warning of an imminent missile attack? The numbers tell us it is.
The second: Reporting, writing and editing a news story of any import takes time, resources and skill. Hence, there will be many fewer of them than there are birthday posts. So if it’s a numbers game, news loses. This is what I’d call self-serving math.

#5 “… there’s understandably going to be a lot of anxiety…”

Here’s some more math: The Pew Research Center reports that 45% of Americans get news from Facebook, a percentage that has been increasing sharply. Why? Because that’s the product Facebook created. It designed itself for that.
As the algorithm tweaks fall into place, and news publishers stand by as their audience plummets, Mosseri concedes: “there’s understandably going to be a lot of anxiety … it’s always a set of trade-offs, we do the best we can with the information at hand.” (You possess ALL the information, by the way.) These are not the words of someone who sees the news media as partners but as pawns. A post is a post is a post.
But that’s not how this company has operated. Since it burst on the scene, not all that many years ago, it has dangled carrot after carrot in front of news media. Do your headlines this way and you’ll be rewarded. Hey, pivot to video! No, try our Instant Articles product (or else). And then, like Lucy yanking the football, it’s gone. Facebook has moved on.
The heart of the issue is that Facebook wields immense power and is subject to minimal accountability. Changes come when Zuckerberg decrees them. Yes, it’s a publicly traded company. Yes, Congress shall make no law … But the power is real and the accountability is not.
And with all this heft, and all this research, Facebook seems to understand so little about the news it serves up. Take, for example, this notion that commenting or reacting to news is what makes news valuable. Yes, that’s true some of the time, but it’s also false some of the time. Sometimes we read the news to be informed. To catch up. To be better citizens. Just because I didn’t share or like an article about climate change doesn’t mean that I don’t care about climate change.
To treat the value of news purely through the lens of whether people have shared it or had “meaningful interactions” with other members of the Facebook “community” misses the value entirely.
And Dear Facebook, sharing and commenting on every piece of news is actually part of the problem: It is what has thrust news and journalism into this hyper-partisan shithole we’re in right now.
I only have one wish for Zuckerberg. In a few short years, he will be the father of a girl in her tweens. I can only assume that she, too, might become obsessed with the Instagram posts of her friends, whether they liked her pic, or that she might discover that everyone is hanging out without her. And it might drive her to tears. And then her wise parents will decide (unilaterally) that they need to limit her screen time to 30 minutes. It’s for her own well-being, after all.

Getting started is more important than being right


Starting design without the starting line

https://dribbble.com/Martin_Kundby
A really great lesson I have learnt is to adopt and adapt the ‘design process’ that has been drummed into us to everyday design and problem-solving. Not only as a student but as an intern, we are constantly reminded of design thinking and other processes that should be used. But are we ever reminded of when and how they are the most appropriate tools?
There’s a small dark area that no one really teaches you: what to do when there is no user research before the project starts. The user-centred design approach is about designing for real people and users, and identifying a problem. But what if you are tasked with a problem when you have no real knowledge of those people or specific users? Sure, you could go and do research, but why use all of that time when your solution might turn out to have no value to the end user?
There are no rules on where and how to start, different projects require different needs and we should be taught to learn and adapt to these needs
I believe it’s because of the way in which we are taught these processes that we come to believe they are linear. Design is romanticised to be this all-knowing, ‘the user is everything’ golden process but in reality, these concepts and methods are flexible and should be used as tools to solve our problems, and not as linear processes. What needs to be emphasized more in the teaching of these approaches is that there are no rules on where and how to start. Different projects require different needs and we should be taught to learn and adapt to these needs.

Getting started with no starting line…

One method I have learnt to use when starting a project with no prior user research is by approaching it as a Sprint and adapting the tools in the method to the project needs.
When I say no user research, this does not mean I haven’t taken some time to become familiar with the project or the people I would be designing for. I mean I haven’t tested, interviewed or really got close to real users. I used YouTube…
Anyway, by becoming familiar with the topic and its users on a surface level, you are able to start exploring the problem and thinking up certain hunches.
Once I have an initial understanding of the problem and potential users, I jump straight into storyboarding. These storyboards are assumed situations for different scenarios the user may encounter. I make sure to do two storyboards for each use case, from one extreme (novice) to the other extreme (expert). I find this starts to highlight some potential problems as we start to visualise how user needs arise.
From these needs, you can start to build more tangible questions in the form of How Might We’s, clustering them to form a bigger problem statement to start a project with.
This may seem like a pragmatic approach to starting a project, but I believe it helps build concepts quicker to test with users and to validate whether the idea or solution is worth spending more time and money on for user research. It’s just a really nice way to get yourself started when you feel overwhelmed because there has been no user research!

I used this approach in a recent project to build prototypes and validate an idea early at Bosch. Fail fast, learn faster you know?

I want to learn, design and write stuff. I’m currently an intern in the user experience team at Bosch Power Tools and an Industrial Design student at Loughborough University. Feel free to get in touch.


Friday, January 19, 2018

A Brief History of Cloud Computing


Threat Intel’s ‘History of…’ series will look at the origins and evolution of notable developments in cyber security.

What exactly is cloud computing? This is something that, no doubt, most people have wondered in recent times, as more and more of the services we use have migrated to the semi-mythical “cloud”.
One dictionary definition of cloud computing defines it as: “Internet-based computing in which large groups of remote servers are networked so as to allow sharing of data-processing tasks, centralized data storage, and online access to computer services or resources.” Users no longer need vast local servers to access storage or carry out certain tasks; they can do it all “in the cloud”, which essentially means over the internet.
If we go back to the very beginning, we can trace cloud computing’s origins all the way back to the 1950s, and the concept of time-sharing. At that time, computers were both huge and hugely expensive, so not every company could afford to have one. To tackle this, users would “time-share” a computer. Basically, they would rent the right to use the computer’s computational power, and share the cost of running it. In a lot of ways, that remains the basic concept of cloud computing today.
In the 1970s, the creation of “virtual machines” took the time-share model to a new level. This development allowed multiple computing environments to be housed in one physical environment. This was a key development that made it possible for the cloud computing we know today to develop.
Professor Ramnath Chellappa is often credited with being the person who coined the term “cloud computing” in its modern context, at a lecture he delivered in 1997. He defined it then as a “computing paradigm where the boundaries of computing will be determined by economic rationale rather than technical limits alone.” However, some months before this, in 1996, a business plan created by a group of technologists at Compaq also used the term when discussing the “evolution” of computing. So, while the source of the expression might be in dispute, it is clear that the modern “cloud” was something that was being seriously thought about by those in the IT industry in the mid ’90s — 20 years ago.

Modern developments

In 2006, Amazon launched Amazon Web Services (AWS), which provided services such as computing and storage in the cloud. Back then, you could rent computer power or storage from Amazon by the hour. Nowadays, you can rent more than 70 services, including analytics, software and mobile services. Its S3 storage service holds reams of data and services millions of requests every second. Amazon Web Services is used by more than one million customers in 190 countries. Massive companies including Netflix and AOL don’t have their own data centers but exclusively use AWS. Its projected revenue for 2017 was $18 billion.
While the other major tech players, such as Microsoft and Google, did subsequently launch their own cloud offerings to compete with AWS, it dominates the cloud infrastructure market; according to recent reports, at the end of 2017 it held a 62 percent share of the public cloud business, with Microsoft Azure holding 20 percent and Google 12 percent. While AWS is still way ahead of its rivals in this space, it is interesting to note that its market share dropped from the previous year, while both Microsoft’s and Google’s market shares grew.
While AWS dominates in the enterprise space, when it comes to consumers, they are probably most familiar with services like Dropbox, iCloud and Google Drive, which they use to store back-ups of photos, documents, and more. The increased use by people of mobile devices with smaller storage capacities increased the need for cloud-based storage among consumers. While they may lack understanding about what exactly the cloud is, it is likely that most consumers are using at least one cloud-based service. The cloud has allowed for the growth of the mobile economy, in many ways, allowing for the development of apps that may not have been possible in the absence of a cloud infrastructure.
In organizations, the number of cloud services in use is even larger. The Symantec ISTR 2017 showed that the average enterprise has 928 cloud apps in use, though many businesses don’t realize that their employees are actually using so many cloud services.
The growth of mobile devices led to an inevitable growth in cloud usage by consumers

Security concerns

However, while there are many advantages to cloud computing, and many reasons why companies and individuals use cloud services, it does present some security concerns. One of the appeals of information stored in the cloud is that it can be accessed remotely; however, if inadequate security protocols are in place, this is also one of its weaknesses. There have been many stories in the news about Amazon S3 buckets being left on the web unsecured and revealing personal information about people. However, as it seems unlikely that cloud computing is going anywhere, the answer to these kinds of issues is more likely to be improving people’s cyber security practices to ensure they protect data stored online with strong passwords, additional forms of authentication such as two-factor, and encryption.
The adoption of cloud was almost inevitable in our hyper-connected world. Meeting the need for computing power and storage simply became too expensive and too demanding for many businesses and individuals to tackle on their own, meaning they needed to farm out these tasks to cloud services. As the move to mobile continually escalates, and as the Internet of Things (IoT) continues to grow as a sector, cloud computing is set to continue its growth.
It may have started out as a marketing term, but cloud computing is an important reality in today’s IT world.
Check out the Security Response blog and follow Threat Intel on Twitter to keep up-to-date with the latest happenings in the world of threat intelligence and cybersecurity.

How video streaming works on the web: An introduction


Note: this article is an introduction to video streaming in JavaScript and is mostly targeted at web developers. A large part of the examples here make use of HTML and modern JavaScript (ES6). If you’re not sufficiently familiar with them, you may find it difficult to follow along, especially the code examples. Sorry in advance for that.

The need for a native video API

From the early to late 2000s, video playback on the web mostly relied on the Flash plugin.
Screen warning that the user should install the Flash plugin, in place of a video
This was because, at the time, there was no other means to stream video in a browser. As a user, you had the choice between either installing third-party plugins like Flash or Silverlight, or not being able to play any video at all.
To fill that hole, the WHATWG began to work on a new version of the HTML standard including, among other things, native video and audio playback (read here: without any plugin). This trend was accelerated even more following Apple’s stance on Flash for its products.
This standard became what is now known as HTML5.
The HTML5 Logo. HTML5 would change the way videos are streamed on web pages
Thus HTML5 brought, among other things, the <video> tag to the web.
This new tag allows you to link to a video directly from the HTML, much like an <img> tag would do for an image.
This is cool and all, but from a media website’s perspective, using a simple img-like tag does not seem sufficient to replace our good ol’ Flash:
  • we might want to switch between multiple video qualities on-the-fly (like YouTube does) to avoid buffering issues
  • live streaming is another use case which looks really difficult to implement that way
  • and what about updating the audio language of the content based on user preferences while the content is streaming, like Netflix does?
Thankfully, all of those points can be answered natively on most browsers, thanks to what the HTML5 specification brought. This article will detail how today’s web does it.

The video tag

As said in the previous chapter, linking to a video in a page is pretty straightforward in HTML5. You just add a video tag to your page, with a few attributes.
For example, you can just write:
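A minimal sketch of what that could look like (the file name some_video.mp4 is just an illustration, and the controls attribute is optional):

<video src="some_video.mp4" controls></video>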
This HTML will allow your page to stream some_video.mp4 directly on any browser that supports the corresponding codecs (and HTML5, of course).
Here is what it looks like:
Simple page corresponding to the previous HTML code
This video tag also provides various APIs to, for example, play, pause, seek or change the speed at which the video plays.
Those APIs are directly accessible through JavaScript:
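For instance, a minimal sketch (assuming the page contains a video element with the hypothetical id “my-video”):

const videoElement = document.getElementById("my-video");
videoElement.play();            // start playback
videoElement.pause();           // pause it
videoElement.currentTime = 10;  // seek to the 10th second
videoElement.playbackRate = 2;  // play twice as fast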
However, most videos we see on the web today display much more complex behaviors than what this alone would allow. For example, switching between video qualities and live streaming would be unnecessarily difficult there.
YouTube displays some more complex use cases: quality switches, subtitles, a tightly controlled progressive download of the video…
All those websites actually do still use the video tag. But instead of simply setting a video file in the src attribute, they make use of much more powerful web APIs, the Media Source Extensions.

The Media Source Extensions

The “Media Source Extensions” (more often shortened to just “MSE”) is a specification from the W3C that most browsers implement today. It was created to allow those complex media use cases directly with HTML and JavaScript.
Those “extensions” add the MediaSource object to JavaScript. As its name suggests, this will be the source of the video, or put more simply, this is the object representing our video’s data.
The video is here “pushed” to the MediaSource, which provides it to the web page
As written in the previous chapter, we still use the HTML5 video tag. Perhaps even more surprisingly, we still use its src attribute. Only this time, we're not adding a link to the video, we're adding a link to the MediaSource object.
You might be confused by this last sentence. We’re not talking about a URL here, we’re talking about an abstract concept of the JavaScript language; how can it be possible to refer to it as a URL on a video tag, which is defined in the HTML?
To allow this kind of use case, the W3C defined the URL.createObjectURL static method. This API allows us to create a URL which will actually refer not to a resource available online, but directly to a JavaScript object created on the client.
This is thus how a MediaSource is attached to a video tag:
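A minimal sketch (again assuming a video element with the hypothetical id “my-video”):

const videoElement = document.getElementById("my-video");
const mediaSource = new MediaSource();

// createObjectURL generates a URL which points directly to the MediaSource object
videoElement.src = URL.createObjectURL(mediaSource);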
And that’s it! Now you know how the streaming platforms play videos on the Web!
… Just kidding. So now we have the MediaSource, but what are we supposed to do with it?
The MSE specification doesn’t stop here. It also defines another concept, the SourceBuffers.

The Source Buffers

The video is not actually “pushed” directly into the MediaSource for playback; SourceBuffers are used for that.
A MediaSource contains one or multiple instances of those, each associated with a type of content.
To stay simple, let’s just say that we have only three possible types:
  • audio
  • video
  • both audio and video
In reality, a “type” is defined by its MIME type, which may also include information about the media codec(s) used
SourceBuffers are all linked to a single MediaSource and each will be used to add our video’s data to the HTML5 video tag directly in JavaScript.
As an example, a frequent use case is to have two source buffers on our MediaSource: one for the video data, and the other for the audio:
Relations between the video tag, the MediaSource, the SourceBuffers and the actual data
Separating video and audio also allows us to manage them separately on the server side. Doing so brings several advantages, as we will see later. This is how it works:
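A sketch of what that could look like (the MIME type/codec strings and URLs are illustrative assumptions, error handling is omitted, and videoElement is the same HTML5 video element as before):

const mediaSource = new MediaSource();
videoElement.src = URL.createObjectURL(mediaSource);

// SourceBuffers can only be created once the MediaSource is "open"
mediaSource.addEventListener("sourceopen", () => {
  const audioSourceBuffer = mediaSource
    .addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
  const videoSourceBuffer = mediaSource
    .addSourceBuffer('video/mp4; codecs="avc1.64001e"');

  // download the whole audio data and push it into its SourceBuffer
  fetch("http://server.com/audio.mp4")
    .then(response => response.arrayBuffer())
    .then(audioData => audioSourceBuffer.appendBuffer(audioData));

  // do the same for the video data
  fetch("http://server.com/video.mp4")
    .then(response => response.arrayBuffer())
    .then(videoData => videoSourceBuffer.appendBuffer(videoData));
});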
And voila!
We’re now able to manually add video and audio data dynamically to our video tag.

It’s now time to write about the audio and video data itself. In the previous example, you might have noticed that the audio and video data were in the mp4 format.
“mp4” is a container format: it contains the media data itself, but also various metadata describing, for example, the start time and duration of the media contained in it.
The MSE specification does not dictate which format must be understood by the browser. For video data, the two most common are mp4 and webm files. The former is pretty well known by now; the latter is sponsored by Google and based on the perhaps better-known Matroska format (“.mkv” files).
Both are well-supported in most browsers.

Media Segments

Still, many questions are left unanswered here:
  • Do we have to wait for the whole content to be downloaded, to be able to push it to a SourceBuffer (and therefore to be able to play it)?
  • How do we switch between multiple qualities or languages?
  • How can we even play live content when the media isn’t finished yet?
In the example from the previous chapter, we had one file representing the whole audio and one file representing the whole video. This can be enough for really simple use cases, but not if you want to go into the complexities offered by most streaming websites (switching languages, qualities, playing live content, etc.).
What actually happens in the more advanced video players is that video and audio data are split into multiple “segments”. These segments can come in various sizes, but they often represent between 2 and 10 seconds of content.
Artistic depiction of segments in a media file
All those video/audio segments then form the complete video/audio content. Those “chunks” of data add a whole new level of flexibility to our previous example: instead of pushing the whole content at once, we can just push progressively multiple segments.
Here is a simplified example:
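A simplified sketch, reusing the audioSourceBuffer from the previous sketch and pushing the audio segments one after the other (the URLs are illustrative, and a real player would also handle errors and buffer limits):

// push each segment into the given SourceBuffer, waiting for the previous
// append to finish before starting the next one
async function pushSegments(sourceBuffer, segmentUrls) {
  for (const url of segmentUrls) {
    const response = await fetch(url);
    const segmentData = await response.arrayBuffer();
    sourceBuffer.appendBuffer(segmentData);
    // appendBuffer is asynchronous: wait for the "updateend" event before
    // pushing the next segment
    await new Promise(resolve =>
      sourceBuffer.addEventListener("updateend", resolve, { once: true }));
  }
}

pushSegments(audioSourceBuffer, [
  "http://server.com/audio/segment0.mp4",
  "http://server.com/audio/segment1.mp4",
  "http://server.com/audio/segment2.mp4",
]);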
This means that we also have those multiple segments on server-side. From the previous example, our server contains at least the following files:
./audio/
  ├── segment0.mp4
  ├── segment1.mp4
  └── segment2.mp4
./video/
  └── segment0.mp4
Note: The audio or video files might not truly be segmented on the server side; the Range HTTP header might be used instead by the client to obtain those files in segments (or, really, the server might do whatever it wants with your request as long as it gives you back segments).
However, these cases are implementation details. We will here always consider that we have segments on the server side.
All of this means that we thankfully do not have to wait for the whole audio or video content to be downloaded to begin playback. We often just need the first segment of each.
Of course, most players do not do this logic by hand for each video and audio segment as we did here, but they follow the same idea: sequentially downloading segments and pushing them into the source buffer.
A funny way to see this logic happen in real life is to open the network monitor in Firefox/Chrome/Edge (on Linux or Windows type “Ctrl+Shift+i” and go to the “Network” tab; on Mac it should be Cmd+Alt+i, then “Network”) and then launch a video on your favorite streaming website.
You should see various video and audio segments being downloaded at a quick pace:
Screenshot of the Chrome Network tab on the Rx-Player’s demo page
By the way, you might have noticed that our segments are just pushed into the source buffers without indicating WHERE, in terms of position in time, they should be pushed.
The segments’ containers do in fact define, among other things, the time at which they should be placed in the whole media. This way, we do not have to synchronize them by hand in JavaScript.

Adaptive Streaming

Many video players have an “auto quality” feature, where the quality is automatically chosen depending on the user’s network and processing capabilities.
This is a central feature of web players, called adaptive streaming.
YouTube “Quality” setting. The default “Auto” mode follows adaptive streaming principles
This behavior is also enabled thanks to the concept of media segments.
On the server-side, the segments are actually encoded in multiple qualities. For example, our server could have the following files stored:
./audio/
  ├── ./128kbps/
  |     ├── segment0.mp4
  |     ├── segment1.mp4
  |     └── segment2.mp4
  └── ./320kbps/
        ├── segment0.mp4
        ├── segment1.mp4
        └── segment2.mp4
./video/
  ├── ./240p/
  |     ├── segment0.mp4
  |     ├── segment1.mp4
  |     └── segment2.mp4
  └── ./720p/
        ├── segment0.mp4
        ├── segment1.mp4
        └── segment2.mp4
A web player will then automatically choose the right segments to download as the network or CPU conditions change.
This is entirely done in JavaScript. For audio segments, it could, for example, look like this:
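A sketch under the assumption that we already have some estimate of the available bandwidth (how that estimate is obtained is out of scope here, and the threshold is arbitrary):

// pick the audio quality from a (hypothetical) bandwidth estimate in kbps
function chooseAudioQuality(estimatedBandwidthKbps) {
  return estimatedBandwidthKbps > 500 ? "320kbps" : "128kbps";
}

async function pushNextAudioSegment(sourceBuffer, segmentNumber, bandwidthKbps) {
  const quality = chooseAudioQuality(bandwidthKbps);
  const url = `http://server.com/audio/${quality}/segment${segmentNumber}.mp4`;
  const response = await fetch(url);
  sourceBuffer.appendBuffer(await response.arrayBuffer());
}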
As you can see, we have no problem putting together segments of different qualities, everything is transparent on the JavaScript-side here. In any case, the container files contain enough information to allow this process to run smoothly.

Switching between languages

On more complex web video players, such as those on Netflix, Amazon Prime Video or MyCanal, it’s also possible to switch between multiple audio languages depending on the user settings.
Example of language options in Amazon Prime Video
Now that you know what you know, the way this feature works should seem pretty simple to you.
As with adaptive streaming, we also have a multitude of segments on the server side:
./audio/
  ├── ./esperanto/
  |     ├── segment0.mp4
  |     ├── segment1.mp4
  |     └── segment2.mp4
  └── ./french/
        ├── segment0.mp4
        ├── segment1.mp4
        └── segment2.mp4
./video/
  ├── segment0.mp4
  ├── segment1.mp4
  └── segment2.mp4
This time, the video player has to switch between languages based not on the client’s capabilities, but on the user’s preference.
For audio segments, this is what the code could look like on the client:
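An illustrative sketch, where preferredLanguage is assumed to come from the user’s settings:

// build the segment URL from the user's preferred audio language
async function pushNextAudioSegment(sourceBuffer, segmentNumber, preferredLanguage) {
  const url = `http://server.com/audio/${preferredLanguage}/segment${segmentNumber}.mp4`;
  const response = await fetch(url);
  sourceBuffer.appendBuffer(await response.arrayBuffer());
}

pushNextAudioSegment(audioSourceBuffer, 0, "esperanto");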
You may also want to “clear” the previous SourceBuffer’s content when switching languages, to avoid mixing audio content in multiple languages.
This is doable through the SourceBuffer.prototype.remove method, which takes a starting and ending time in seconds:
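For example (the 0 to 40 second range is just an illustration; a real player would remove the time range it knows is already buffered):

// remove everything buffered between the 0th and the 40th second
audioSourceBuffer.remove(0, 40);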
Of course, it’s also possible to combine both adaptive streaming and multiple languages. We could have our server organized as such:
./audio/
  ├── ./esperanto/
  |     ├── ./128kbps/
  |     |     ├── segment0.mp4
  |     |     ├── segment1.mp4
  |     |     └── segment2.mp4
  |     └── ./320kbps/
  |           ├── segment0.mp4
  |           ├── segment1.mp4
  |           └── segment2.mp4
  └── ./french/
        ├── ./128kbps/
        |     ├── segment0.mp4
        |     ├── segment1.mp4
        |     └── segment2.mp4
        └── ./320kbps/
              ├── segment0.mp4
              ├── segment1.mp4
              └── segment2.mp4
./video/
  ├── ./240p/
  |     ├── segment0.mp4
  |     ├── segment1.mp4
  |     └── segment2.mp4
  └── ./720p/
        ├── segment0.mp4
        ├── segment1.mp4
        └── segment2.mp4
And our client would have to manage both languages and network conditions instead:
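A sketch simply combining the two previous ones (again, the bandwidth threshold and URLs are illustrative assumptions):

async function pushNextAudioSegment(sourceBuffer, segmentNumber, language, bandwidthKbps) {
  const quality = bandwidthKbps > 500 ? "320kbps" : "128kbps";
  const url =
    `http://server.com/audio/${language}/${quality}/segment${segmentNumber}.mp4`;
  const response = await fetch(url);
  sourceBuffer.appendBuffer(await response.arrayBuffer());
}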
As you can see, there are now a lot of ways the same content can be defined.
This uncovers another advantage that separate video and audio segments have over whole files. With the latter, we would have to combine every possibility on the server side, which might take a lot more space:
segment0_video_240p_audio_esperanto_128kbps.mp4
segment0_video_240p_audio_esperanto_320kbps.mp4
segment0_video_240p_audio_french_128kbps.mp4
segment0_video_240p_audio_french_320kbps.mp4
segment0_video_720p_audio_esperanto_128kbps.mp4
segment0_video_720p_audio_esperanto_320kbps.mp4
segment0_video_720p_audio_french_128kbps.mp4
segment0_video_720p_audio_french_320kbps.mp4
segment1_video_240p_audio_esperanto_128kbps.mp4
segment1_video_240p_audio_esperanto_320kbps.mp4
segment1_video_240p_audio_french_128kbps.mp4
segment1_video_240p_audio_french_320kbps.mp4
segment1_video_720p_audio_esperanto_128kbps.mp4
segment1_video_720p_audio_esperanto_320kbps.mp4
segment1_video_720p_audio_french_128kbps.mp4
segment1_video_720p_audio_french_320kbps.mp4
segment2_video_240p_audio_esperanto_128kbps.mp4
segment2_video_240p_audio_esperanto_320kbps.mp4
segment2_video_240p_audio_french_128kbps.mp4
segment2_video_240p_audio_french_320kbps.mp4
segment2_video_720p_audio_esperanto_128kbps.mp4
segment2_video_720p_audio_esperanto_320kbps.mp4
segment2_video_720p_audio_french_128kbps.mp4
segment2_video_720p_audio_french_320kbps.mp4
Here we have more files, with a lot of redundancy (the exact same video data is included in multiple files).
This is, as you can see, highly inefficient on the server side. But it is also a disadvantage on the client side, as switching the audio language might force you to re-download the video along with it (at a high cost in bandwidth).

Live Contents

We didn’t talk about live streaming yet.
Live streaming on the web is becoming very common (twitch.tv, YouTube live streams…) and is again greatly simplified by the fact that our video and audio files are segmented.
Screenshot taken from twitch.tv, which specializes in video game live streaming
To explain how it basically works in the simplest way, let’s consider a YouTube channel which began streaming just 4 seconds ago.
If our segments are 2 seconds long, we should already have two audio segments and two video segments generated on YouTube’s server:
  • Two representing the content from 0 seconds to 2 seconds (1 audio + 1 video)
  • Two representing it from 2 seconds to 4 seconds (again 1 audio + 1 video)
./audio/
  ├── segment0s.mp4
  └── segment2s.mp4
./video/
  ├── segment0s.mp4
  └── segment2s.mp4
At 5 seconds, there hasn’t been time to generate the next segment yet, so, for now, the server has exactly the same content available.
After 6 seconds, a new segment can be generated, we now have:
./audio/
  ├── segment0s.mp4
  ├── segment2s.mp4
  └── segment4s.mp4
./video/
  ├── segment0s.mp4
  ├── segment2s.mp4
  └── segment4s.mp4
This is pretty logical on the server side: live content is not actually continuous. It is segmented like the non-live kind, but new segments keep appearing progressively as time goes on.
Now how can we know from JS what segments are available at a certain point in time on the server?
We might just use a clock on the client, and infer as time goes when new segments are becoming available on the server-side.
We would follow the “segmentX.mp4” naming scheme, and increment the “X” from the last downloaded segment each time (segment0.mp4, then, 2 seconds later, segment1.mp4, etc.).
In many cases however, this could become too imprecise: media segments may have variable durations, the server might have latencies when generating them, it might want to delete segments which are too old to save space…
As a client, you want to request the latest segments as soon as they are available while still avoiding requesting them too soon when they are not yet generated (which would lead to a 404 HTTP error).
This problem is usually resolved by using a transport protocol (also sometimes called Streaming Media Protocol).

Transport Protocols

Explaining the different transport protocols in depth would be too verbose for this article. Let’s just say that most of them share the same core concept: the Manifest.
A Manifest is a file describing which segments are available on the server.
Example of a DASH Manifest, based on XML
With it, you can describe most of the things we have learned about in this article:
  • Which audio languages the content is available in and where they are on the server (as in, “at which URL”)
  • The different audio and video qualities available
  • And of course, what segments are available, in the context of live streaming
The most common transport protocols used in a web context are:
  • DASH
    used by YouTube, Netflix, Amazon Prime Video and many others. DASH’s manifest is called the Media Presentation Description (or MPD) and is XML-based at its core.
    The DASH specification is very flexible, which allows MPDs to support most use cases (audio description, parental controls) and to be codec-agnostic.
  • HLS
    Developed by Apple, used by DailyMotion, Twitch.tv and many others. The HLS manifest is called the playlist and is in the m3u8 format (m3u playlist files, encoded in UTF-8).
  • Smooth Streaming
    Developed by Microsoft, used by multiple Microsoft products and MyCanal. In Smooth Streaming, manifests are called… Manifests, and they are XML-based.

In the real — web — world

As you can see, the core concept behind video on the web relies on media segments being pushed dynamically in JavaScript.
This behavior quickly becomes pretty complex, as there are a lot of features a video player has to support:
  • it has to download and parse some sort of manifest file
  • it has to guess the current network conditions
  • it needs to register user preferences (for example, the preferred languages)
  • it has to know which segment to download depending on at least the two previous points
  • it has to manage a segment pipeline to sequentially download the right segments at the right time (downloading every segment at the same time would be inefficient: you need the earliest one sooner than the next)
  • it also has to deal with subtitles, often entirely managed in JS
  • Some video players also manage a thumbnails track, which you can often see when hovering the progress bar
  • Many services also require DRM management
  • and many other things…
Still, at their core, complex web-compatible video players are all based on MediaSource and SourceBuffers.
Their web players all make use of MediaSources and SourceBuffers at their core
That’s why those tasks are usually performed by libraries, which do just that.
More often than not, those libraries do not even define a User Interface. They mostly provide rich APIs, take the Manifest and various preferences as arguments, and push the right segment at the right time into the right source buffers.
This allows greater modularity and flexibility when designing media websites and web applications, which, by nature, are complex front-ends.

Open-source web video players

There are many web video players available today doing pretty much what this article explains. Here are various open-source examples:
  • rx-player: Configurable player for both DASH and Smooth Streaming contents. Written in TypeScript. Shameless self-plug, as I’m one of the devs.
  • dash.js: Plays DASH contents and supports a wide range of DASH features. Written by the DASH Industry Forum, a consortium promoting interoperability guidelines for the DASH transport protocol.
  • hls.js: well-reputed HLS player. Used in production by multiple big names like Dailymotion, Canal+, Adult Swim, Twitter, VK and more.
  • shaka-player: DASH and HLS player. Maintained by Google.
By the way, Canal+ is hiring! If working with this sort of stuff interests you, take a look at http://www.vousmeritezcanalplus.com/ (⚠️ French website).

Are there any unavoidable technologies?


Last night I was struggling to fall asleep, so I started to reflect on a documentary I had seen. It was dedicated to Nikola Tesla, the visionary inventor who was obsessed with electrical energy at the turn of the 19th and 20th centuries.
The story that made me reflect is the famous “war of the currents” (a movie version with Benedict Cumberbatch has just been released). Thomas Alva Edison argued that direct current was the ideal solution to “electrify” the world, and invested large sums in it. Tesla, who worked a few months for Edison, was instead convinced that alternating current should be used.
I won’t go into technical explanations. Let’s just say that Tesla, allying with Edison’s rival, the industrialist George Westinghouse, won. Today we use alternating current (AC), but then transform it into direct current (DC) when we need to power our digital devices (or any other battery-powered object).
The question I asked myself was: if there had been no Westinghouse and no Tesla, would we have direct current distribution networks today?
Most likely not, because the advantages of AC distribution would still have emerged, and rather soon at that.
More generally, the question is: are there unavoidable technologies?
Are there any alternative technological paths?
In the only case study available, that of human civilization, some discoveries and inventions, and the order in which they were made, seem to be obligatory: fire -> metals -> agriculture -> cities -> the wheel -> earthenware, for example.
But hunter-gatherer societies, too, could have invented the wheel: it would have been very convenient for them, there was no reason not to have the idea, and they had the ability to build it. Perhaps some tribes did so, using it for generations before the memory was lost.
A sculpture at Göbekli Tepe. By Teomancimit (own work), CC BY-SA 3.0
Scholars think that to get to monumental buildings, cities and civilizations we must pass through agriculture: the production surplus can support a large number of people and give birth to social classes, such as nobles and priests, exempt from manual work but able to “commission” great works.
The extraordinary discovery of the Göbekli Tepe temple, dating from around 9,500 BC, has however called into question the need for a transition to an urban society with social differentiation in order to create such buildings.
Another example: sophisticated mechanisms such as those of clocks began to spread in the Middle Ages, with the first specimens placed in church bell towers.
Why didn’t the Greeks or the Romans, so skilled in the practical arts, come to develop similar mechanisms? In fact, after the discovery of the Antikythera mechanism, a sophisticated astronomical calculator, we have seen that the capabilities (for example, achieving minimal tolerances) and the techniques to build high-precision instruments existed. It was probably social, economic and commercial structures, more than technological limits, that prevented the Romans from having pendulum clocks. In the same way, having plenty of low-cost labor, the slaves, did not stimulate the invention of steam engines, apart from some rare and simple devices used for “special effects” in temples.
A reconstruction of the Antikythera mechanism. Dave L via Flickr, CC BY 2.0
With regard to the innovations of the last 120 years, it is important to underline, alas, the crucial role of the two world wars, especially the second, in accelerating technological development; just think of rocketry and computer science, born in that period, and of electronics, developed shortly after (and then there was the Cold War…).
If there had been no World War II, what technologies would surround our daily life?
We would probably be at the level of the ’60s/’70s, with mainframes, the first satellites in orbit, color televisions (but with cathode-ray tubes), the first commercial jet planes, just-in-time production chains, and so on.
Perhaps an analog Internet would have developed, thanks to unpredictable developments in the amateur radio network, hybridized with systems such as fax and video/audio cassettes.
It is difficult to establish the timelines, the life cycles of individual technologies, and their interconnections and interdependencies.
In a complex system such as that of human society, small variations in the initial conditions can generate great changes in the trajectories and directions of the space of innovations.
As a last example, consider the web. Sir Timothy John Berners-Lee created it while working at CERN in 1990.
The web (or something similar) could have been developed at least 10 years earlier, in one of the American universities already interconnected by a telematic network.
That would have meant the portals of the first web appearing at the end of the ’80s, web 2.0 around 1994, and social networks established around 1997, and today… we cannot know. Also because there would have been a longer interval before the mobile web, since the evolution of mobile telephony would in any case have followed its course as in our timeline. Or would it?


Thursday, January 18, 2018

Live TV has a new home on Fire TV


“Alexa, tune to HBO Family.”

We’ve all been there: the infinite scroll, scrolling around with no idea what to watch. Good news for the indecisive folks in the room: with the new On Now row and Channel Guide on Fire TV, it’s easier than ever to watch Live TV with Amazon Channels.
Amazon Channels is the Prime benefit that lets Prime members subscribe to over 100 channels, with no cable required, no apps to download, and the ability to cancel anytime. Most movies and TV shows included in your subscriptions are available to watch on demand. Some channels also feature Watch Live, which gives you the option to live stream programming on supported devices at the same time it’s broadcast on TV. That means you’ll be able to watch and live-tweet Westworld when everyone else is watching.

On Now ✨

Here at Fire TV, we want to make it really easy to discover the live programming available to you. If you’re signed up for HBO, SHOWTIME, STARZ, or Cinemax through Amazon Channels, you will see a new row on your homepage called On Now. That row will show you all of the programming that is live now.

On Later ⏰

In addition to this handy dandy row, you will also have the ability to look into the future 🔮. If you’re curious what’s on later today or coming up in the next two weeks, you can use the new Channel Guide to browse the entire schedule. To launch the Guide, simply press the Options button (it looks like a hamburger) on the Alexa Voice Remote while watching Live TV and see your channels and all the future programming information. Don’t forget to favorite ⭐️ your top channels so that they show up first in your Guide. Coming up this weekend, SHOWTIME Showcase will be airing Death Becomes Her and St. Elmo’s Fire; who needs weekend plans when two of the best movies are on?!

Just say, “Alexa, watch HBO.” 🗣️

If you already know what channel you want to watch, simply press the microphone button on your Alexa Voice Remote, or speak to your connected Echo device, and say “Alexa, watch ___”. The live channel will instantly tune on command.
Here are a few voice commands to try:
  • “Alexa, watch HBO.”
  • “Alexa, tune to HBO Family.”
  • “Alexa, go to Cinemax.”
  • “Alexa, go to SHOWTIME.”
  • “Alexa, watch STARZ.”
  • “Alexa, go to the Channel Guide.”
As always, you can ask Alexa to search for shows, movies, actors, genres and more. If you search for a show or movie that happens to be airing live, the channel will appear in the search results.
The new Live TV experience is currently available with subscriptions offered through Amazon Channels (HBO, SHOWTIME, STARZ, Cinemax), and we will be adding more channels in the near future. Start your free trial with these channels today to get started with Live TV on your Fire TV. This functionality is only available if you have an HBO, SHOWTIME, STARZ, or Cinemax subscription through Amazon Channels. If you access content from these providers through another method, you will not see an On Now row or the Channel Guide on your Fire TV. Happy streaming!
