Programming & IT Tricks

Tuesday, January 23, 2018

AI Innovation: Security and Privacy Challenges


To anyone working in technology (or, really, anyone on the Internet), the term “AI” is everywhere. Artificial intelligence — technically, machine learning — is finding application in virtually every industry on the planet, from medicine and finance to entertainment and law enforcement. As the Internet of Things (IoT) continues to expand, and the potential for blockchain becomes more widely realized, ML growth will occur through these areas as well.
While current technical constraints limit these models from reaching “general intelligence” capability, organizations continue to push the bounds of ML’s domain-specific applications, such as image recognition and natural language processing. Modern computing power (GPUs in particular) has contributed greatly to these recent developments — which is why it’s also worth noting that quantum computing will accelerate this progress exponentially over the next several years.
Alongside enormous growth in this space, however, has been increased criticism; from conflating AI with machine learning to relying on those very buzzwords to attract large investments, many “innovators” in this space have drawn criticism from technologists as to the legitimacy of their contributions. Thankfully, there’s plenty of room — and, by extension, overlooked profit — for innovation with ML’s security and privacy challenges.

Reverse-Engineering

Machine learning models, much like any piece of software, are prone to theft and subsequent reverse-engineering. In late 2016, researchers at Cornell Tech, the Swiss Institute EPFL, and the University of North Carolina reverse-engineered a sophisticated Amazon AI by analyzing its responses to only a few thousand queries; their clone replicated the original model’s output with nearly perfect accuracy. The process is not difficult to execute, and once completed, hackers will have effectively “copied” the entire machine learning algorithm — which its creators presumably spent generously to develop.
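To make the idea concrete, here is a minimal Python sketch of the extraction recipe (toy data, a toy victim model, and a made-up prediction_api stand-in; nothing here is the researchers’ actual setup): query the black box, record its answers, and fit a local clone.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Victim side (hidden from the attacker in a real attack).
    X_train = rng.random((500, 4))
    y_train = (X_train.sum(axis=1) > 2).astype(int)
    victim = LogisticRegression().fit(X_train, y_train)

    def prediction_api(X):
        # Hypothetical stand-in for the victim's public prediction endpoint.
        return victim.predict(X)

    # Attacker side: a few thousand queries, then fit a local surrogate.
    X_probe = rng.random((3000, 4))
    clone = LogisticRegression().fit(X_probe, prediction_api(X_probe))

    agreement = (clone.predict(X_probe) == victim.predict(X_probe)).mean()
    print(f"clone/victim agreement: {agreement:.1%}")  # typically near 100%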
The risk this poses will only continue to grow. In addition to the potentially massive financial costs of intellectual property theft, this vulnerability also poses threats to national security — especially as governments pour billions of dollars into autonomous weapon research.
While some researchers have suggested that increased model complexity is the best solution, there hasn’t been nearly enough open work done in this space; it’s a critical (albeit underpublicized) opportunity for innovation — all in defense of the multi-billion-dollar AI sector.

Adversarial “Injection”

Machine learning also faces the risk of adversarial “injection” — sending malicious data that disrupts a neural network’s functionality. Last year, for instance, researchers from four top universities confused image recognition systems by adding small stickers onto a photo, through what they termed Robust Physical Perturbation (RP2) attacks; the networks in question then misclassified the image. Another team at NYU showed a similar attack against a facial recognition system, which would allow a suspect individual to easily escape detection.
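For intuition, here is a tiny gradient-based sketch in the spirit of these attacks; it is a generic FGSM-style perturbation on a toy linear model, not the RP2 method itself.

    import torch
    import torch.nn.functional as F

    model = torch.nn.Linear(784, 10)            # toy stand-in for an image classifier
    x = torch.rand(1, 784, requires_grad=True)  # the "image"
    true_label = torch.tensor([3])

    # Compute the loss and the gradient of the loss w.r.t. the input.
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()

    # Nudge the input a tiny amount in the direction that increases the loss.
    epsilon = 0.05
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
    print(model(x).argmax().item(), model(x_adv).argmax().item())  # labels may differ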
Not only is this attack a threat to the network itself (i.e. consider this against a self-driving car), but it’s also a threat to companies who outsource their AI development and risk contractors putting their own “backdoors” into the system. Jaime Blasco, Chief Scientist at security company AlienVault, points out that this risk will only increase as the world depends more and more on machine learning. What would happen, for instance, if these flaws persisted in military systems? Law enforcement cameras? Surgical robots?

Training Data Privacy

Protecting the training data put into machine learning models is yet another area that needs innovation. Currently, hackers can reverse-engineer user data out of machine learning models with relative ease. Since the bulk of a model’s training data is often personally identifiable information — e.g., in medicine and finance — this means anyone from an organized crime group to a business competitor can reap economic reward from such attacks.
As machine learning models move to the cloud (i.e. self-driving cars), this becomes even more complicated; at the same time that users need to privately and securely send their data to the central network, the network needs to make sure it can trust the user’s data (so tokenizing the data via hashing, for instance, isn’t necessarily an option). We can once again abstract this challenge to everything from mobile phones to weapons systems.
Further, as organizations seek personal data for ML research, their clients might want to contribute to the work (e.g. improving cancer detection) without compromising their privacy (e.g. providing an excess of PII that just sits in a database). These two interests currently seem at odds — but they also aren’t receiving much focus, so we shouldn’t see this opposition as inherent. Smart redesign could easily mitigate these problems.
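One well-known direction for this problem (not named in the post; added here as a hedged illustration) is differential privacy: add calibrated noise so aggregate statistics stay useful while any single contributor’s record is masked. A minimal Laplace-mechanism sketch, with illustrative numbers:

    import numpy as np

    def private_mean(values, epsilon, lo, hi):
        # Laplace mechanism: the sensitivity of a bounded mean is (hi - lo) / n.
        values = np.clip(values, lo, hi)
        sensitivity = (hi - lo) / len(values)
        noise = np.random.laplace(scale=sensitivity / epsilon)
        return values.mean() + noise

    ages = np.random.randint(20, 70, size=10_000)
    print(private_mean(ages, epsilon=0.5, lo=20, hi=70))  # close to the true mean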

Conclusion

In short: it’s time some innovators in the AI space focused on its security and privacy issues. With the world increasingly dependent on these algorithms, there’s simply too much at stake — including a lot of money for those who address these challenges.

Wednesday, January 17, 2018

iOS 11 Password Problems


People have long appreciated how Apple addressed security. For decades, the company has been building a multi-layered ecosystem to secure its customers and protect its software and hardware systems from most online threats. Apple products do have some flaws (who doesn’t?), but overall its mobile systems were the most secure among all competitors.
Things have changed. Although iOS 11 brought us the great SOS feature and the requirement to type in the passcode when establishing trust with new computers, it also introduced some questionable changes, which are described in this article.
The goal of these changes was to make it easier for users to operate their devices, but each small change came with a tradeoff in overall security.
Put together, these tradeoffs stripped all layers of protection off a once-secure ecosystem. The only security layer left in iOS 11 is the passcode. If someone gets hold of your iPhone and manages to find out your passcode, you end up losing your Apple ID, your data files, all passwords to third-party web accounts, and access to other Apple devices registered with that ID. Even more damage is possible, because Apple removed all previous protection levels and left only the passcode in iOS 11.
The key problem:
In sensitive environments, it is not enough to secure only the front door of the building and leave all inner rooms without additional keys and checks. Sadly, that is exactly what happened to iOS: if you have the passcode, you can get everything else.
Below, you will see what attackers can do to a user’s data if they have access to the device and the passcode.
iTunes backup password
iPhone backups made with the help of iTunes can be safeguarded with a password. With each new version, Apple successfully increased backup password security, addressing the growing threats from password-cracking crooks.
All of a sudden, in iOS 11, Apple allows resetting that extremely secure password. With the device in hand and the passcode known, there is no more need to break your head creating sophisticated attacks; you can just remove the backup password.
Before I tell you why this is so important, let me explain how it was implemented earlier. In iOS 8, 9, and 10 you could create a password in iTunes to secure your backups. You had to do it just once, and all future backups on any of your devices would stay protected with that password.
Importantly, this password belonged to your Apple device, not to the computer or iTunes. You could connect the iPhone to a different PC with a fresh copy of iTunes and make a backup; that backup would still be safeguarded by the backup password you set previously, perhaps a very long time ago.
iOS controlled all password changes and removal attempts, requiring you to provide your old password first. People who forgot their passwords were stuck with what they had, or had to reset the device to factory settings and lose all their data.
That was a genuinely secure way to handle passwords. But users wept, the police started to snivel, and the FBI complained. Apple decided to give up.
Pillaging backup passwords in iOS 11
Although you can still set a backup password in iTunes that cannot later be changed without the original one, this means little, because the backup password can now be removed entirely on the iOS device itself.
Apple knowledge base says:
You can’t restore an encrypted backup without its password. You won’t be able to use previous encrypted backups, BUT you can back up your CURRENT data using iTunes and setting a new backup password.
Now, for crooks to extract sensitive information from the device, they just need to make a new backup. They can set a temporary password (1234, for example) for the new backup. Once it is ready, they can extract user data such as credit card info, passwords, and health data. Turning this information into readable format requires some forensic tools, but those are widely available on the market.
Among all those passwords, you will most probably stumble upon the Google account password. With that in hand, an attacker can access a whole lot of personal data. Even if the Google account has multi-factor authentication, the very iPhone in hand often contains the tied SIM card.
Now imagine hackers got hold of an iPhone with a previous version of iOS. It is a win again, because updating the iOS to version 11 is not a problem. Yes, the iPhone 5 cannot run iOS 11, but good old jailbreaking of 32-bit devices still allows attackers to gain full physical control.
Again, this post assumes the crooks know the passcode. But if you grabbed your boss’s iPhone, you can relatively easily brute-force the passcode with the help of the numerous tools that are common these days.
To summarize: with the iPhone and its passcode, it is possible to get:
· Application data
· Local images and videos
· Passwords from local keychain
· Essentially everything located in a local backup
Sounds massive? Wait, it is just the beginning. Next comes changing the Apple ID password, disabling the iCloud lock, and locking or erasing the user’s other devices remotely.
Apple ID password
With all other services I use, changing an account password requires providing the old password. Apple sees it differently. To reset the Apple ID password (using the device), you just need to confirm the device passcode. This works even for accounts with multi-factor authentication, because most probably your device holds the necessary SIM.
Moving forward on our list, now you can also:
· Change the Apple ID password
· Deactivate the iCloud lock and consequently reset the iPhone using a different account
· Get access to practically everything stored in that iCloud account
· See on the map the actual location of other i-devices registered with the same account and remotely erase or lock those i-devices
· Change the phone number and begin receiving multi-factor codes to your SIM
So, in order to reset the Apple account and iCloud password, you need to go to Settings > Apple ID > Password & Security > Change Password. You will now have to enter the passcode and then you will be able to change the password for Apple ID and iCloud. It is that simple.
Next, you can change the Trusted Phone Number. Just add and confirm a new number and then remove the old one.
Getting into iCloud
Resetting the victim’s iCloud password and adding your own phone number to receive 2FA codes gives access to everything the victim has in that Apple account: call logs, the contact list, iCloud Keychain, photos taken with all other i-devices, iCloud backups, etc. And iCloud backups may contain tons of information, as Apple keeps up to three recent backups per device registered to one Apple ID.
Synced Data
Moreover, iCloud lets crooks access information synced across all i-devices: browser passwords, bookmarks, browsing history (but not VPN data), notes, etc. If the user also has a Mac, you can get their desktop files and documents.
iCloud KeyChain
To sync Safari passwords, payment info, and auth tokens, Apple uses a cloud service called iCloud Keychain. Once you change the iCloud password, you can download all the Keychain data. You will even be able to see the old (original) password for the victim’s (now your) Apple account. Additionally, you will have access to email account passwords, Wi-Fi passwords, and practically every password the victim ever typed into the browser.
Bottom line
iOS 11 breaks the delicate convenience/security balance, tilting heavily toward user convenience.
If an attacker steals your iPhone and recovers the passcode, no extra layer of protection is left to secure your data. You will be completely exposed.
As the passcode is the only protection left, be sure to use all six digits allowed.
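For a back-of-the-envelope sense of why those extra digits matter, here is a tiny Python sketch; the attempts-per-second rate is purely illustrative, not a measured figure.

    ATTEMPTS_PER_SECOND = 12.5  # illustrative cracking rate, not a measured figure

    for digits in (4, 6):
        keyspace = 10 ** digits
        worst_case_hours = keyspace / ATTEMPTS_PER_SECOND / 3600
        print(f"{digits}-digit passcode: {keyspace:,} combinations, "
              f"~{worst_case_hours:,.1f} hours to try them all")

At that illustrative rate, a 4-digit passcode falls in minutes, while a 6-digit one takes roughly a day of uninterrupted guessing.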
I hope Apple will fix this security issue.

Thursday, January 11, 2018

HTTPS explained with carrier pigeons


Cryptography can be a hard subject to understand. It’s full of mathematical proofs. But unless you are actually developing cryptographic systems, much of that complexity is not necessary to understand what is going on at a high level.
If you opened this article hoping to create the next HTTPS protocol, I’m sorry to say that pigeons won’t be enough. Otherwise, brew some coffee and enjoy the article.

Alice, Bob and … pigeons?

Any activity you do on the Internet (reading this article, buying stuff on Amazon, uploading cat pictures) comes down to sending and receiving messages to and from a server.
This can be a bit abstract so let’s imagine that those messages were delivered by carrier pigeons. I know that this may seem very arbitrary, but trust me HTTPS works the same way, albeit a lot faster.
Also instead of talking about servers, clients and hackers, we will talk about Alice, Bob and Mallory. If this isn’t your first time trying to understand cryptographic concepts you will recognize those names, because they are widely used in technical literature.

A first naive communication

If Alice wants to send a message to Bob, she attaches the message to the carrier pigeon’s leg and sends it to Bob. Bob receives the message, reads it, and all is good.
But what if Mallory intercepted Alice’s pigeon in flight and changed the message? Bob would have no way of knowing that the message that was sent by Alice was modified in transit.
This is how HTTP works. Pretty scary right? I wouldn’t send my bank credentials over HTTP and neither should you.

A secret code

Now what if Alice and Bob are very crafty? They agree to write their messages using a secret code: they will shift each letter back by 3 positions in the alphabet. For example, D → A, E → B, F → C. The plain text message “secret message” would become “pbzobq jbppxdb”.
Now if Mallory intercepts the pigeon he won’t be able to change the message into something meaningful nor understand what it says, because he doesn’t know the code. But Bob can simply apply the code in reverse and decrypt the message where A → D, B → E, C → F. The cipher text “pbzobq jbppxdb” would be decrypted back to “secret message”.
Success!
This is called symmetric key cryptography, because if you know how to encrypt a message you also know how to decrypt it.
The code I described above is commonly known as the Caesar cipher. In real life, we use fancier and more complex codes, but the main idea is the same.
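If you want to play with Alice and Bob’s code yourself, here it is as a few lines of Python:

    def caesar(text, shift):
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('a') if ch.islower() else ord('A')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return ''.join(out)

    encrypted = caesar("secret message", -3)  # 'pbzobq jbppxdb'
    print(caesar(encrypted, 3))               # back to 'secret message'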

How do we decide the key?

Symmetric key cryptography is very secure if no one apart from the sender and receiver know what key was used. In the Caesar cipher, the key is an offset of how many letters we shift each letter by. In our example we used an offset of 3, but could have also used 4 or 12.
The issue is that if Alice and Bob don’t meet before starting to send messages with the pigeon, they would have no way to establish a key securely. If they send the key in the message itself, Mallory would intercept the message and discover the key. This would allow Mallory to then read or change the message as he wishes before and after Alice and Bob start to encrypt their messages.
This is the typical example of a Man in the Middle Attack, and the only way to avoid it is to change the encryption system altogether.

Pigeons carrying boxes

So Alice and Bob come up with an even better system. When Alice wants to send Bob a message she will follow the procedure below:
  • Alice sends a pigeon to Bob without any message.
  • Bob sends the pigeon back carrying a box with an open lock, while keeping the key.
  • Alice puts the message in the box, closes the lock and sends the box to Bob.
  • Bob receives the box, opens it with the key and reads the message.
This way Mallory can’t change the message by intercepting the pigeon, because he doesn’t have the key. The same process is followed when Bob wants to send Alice a message.
Alice and Bob just used what is commonly known as asymmetric key cryptography. It’s called asymmetric, because even if you can encrypt a message (lock the box) you can’t decrypt it (open a closed box).
In technical speech the box is known as the public key and the key to open it is known as the private key.
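To see the box-and-key idea in real code, here is a minimal sketch using the Python cryptography package, with RSA and OAEP padding standing in for the box; the message is illustrative.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Bob makes a key pair: the public key is the open box, the private key opens it.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    box = public_key.encrypt(b"meet at dawn", oaep)  # Alice locks the message in
    print(private_key.decrypt(box, oaep))            # only Bob's key opens it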

How do I trust the box?

If you paid attention, you may have noticed that we still have a problem. When Bob receives the open box, how can he be sure that it came from Alice, and that Mallory didn’t intercept the pigeon and swap the box for one he has the key to?
Alice decides that she will sign the box, this way when Bob receives the box he checks the signature and knows that it was Alice who sent the box.
Some of you may be thinking, how would Bob identify Alice’s signature in the first place? Good question. Alice and Bob had this problem too, so they decided that, instead of Alice signing the box, Ted will sign the box.
Who is Ted? Ted is a very famous, well known and trustworthy guy. Ted gave his signature to everyone and everybody trusts that he will only sign boxes for legitimate people.
Ted will only sign a box for Alice if he’s sure that the one asking for the signature is Alice. So Mallory cannot get an “Alice” box signed by Ted on her own behalf; Bob would know the box is a fraud, because Ted only signs boxes for people after verifying their identity.
Ted in technical terms is commonly referred to as a Certification Authority and the browser you are reading this article with comes packaged with the signatures of various Certification Authorities.
So when you connect to a website for the first time you trust its box because you trust Ted and Ted tells you that the box is legitimate.

Boxes are heavy

Alice and Bob now have a reliable system to communicate, but they realize that pigeons carrying boxes are slower than the ones carrying only the message.
They decide to use the box method (asymmetric cryptography) only to agree on a key, and then to encrypt the messages themselves using symmetric cryptography (remember the Caesar cipher?).
This way they get the best of both worlds. The reliability of asymmetric cryptography and the efficiency of symmetric cryptography.
In the real world there aren’t slow pigeons, but nonetheless encrypting messages using asymmetric cryptography is slower than using symmetric cryptography, so we only use it to exchange the encryption keys.
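Here is that hybrid idea as a short Python sketch (again using the cryptography package): the slow box is used exactly once, to deliver a fast symmetric key.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Alice generates a symmetric key and sends it once, locked in Bob's box.
    session_key = Fernet.generate_key()
    locked_key = bob_private.public_key().encrypt(session_key, oaep)

    # Bob opens the box; from here on both sides use the fast symmetric channel.
    channel = Fernet(bob_private.decrypt(locked_key, oaep))
    token = Fernet(session_key).encrypt(b"many fast messages")
    print(channel.decrypt(token))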
Now you know how HTTPS works, and your coffee should also be ready. Go drink it, you deserve it 😉

Saturday, January 6, 2018

What we learned about productivity from analyzing 225 million hours of working time in 2017


This post was originally published on the RescueTime blog. Check us out for more like it.
When exactly are we the most productive?
Thinking back on your last year, you probably have no idea. Days blend together. Months fly by. And another year turns over without any real understanding of how we actually spent our time.
Our mission at RescueTime has always been to help you do more meaningful work. And this starts with understanding how you spend your days, when you’re most productive, and what’s getting in your way.
In 2017, we logged over 225 million hours of digital time from hundreds of thousands of RescueTime users around the world.
By studying the anonymized data of how people spent their time on their computers and phones over the past 12 months, we’ve pinpointed exactly what days and times we do the most productive work, how often we’re getting distracted by emails or social media, and how much time a week we actually have to do meaningful work.
Key Takeaways:

What was the most (and least) productive day of 2017?

Simply put, our data shows that people were the most productive on November 14th. In fact, that entire week ranked as the most productive of the year.
Which makes sense. With American Thanksgiving the next week and the mad holiday rush shortly after, mid-November is a great time for people to cram in a few extra work hours and get caught up before gorging on turkey dinner.
On the other side of the spectrum, we didn’t get a good start to the year. January 6th — the first Friday of the year — was the least productive day of 2017.

Now, what do we mean when we talk about the “most” or “least” productive days?

RescueTime is a tool that tracks how you spend your time on your computer and phone and lets you categorize activities on a scale from very distracting to very productive. So, for example, if you’re a writer, time spent in Microsoft Word or Google Docs is categorized as very productive, while social media is very distracting.
From that data, we calculate your productivity pulse — a score out of 100 for how much of your time you spent on activities that you deem productive.
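RescueTime doesn’t spell out the exact formula in this post, so here is a hypothetical sketch of how a pulse-style score could be computed; the category weights are assumptions for illustration.

    WEIGHTS = {"very productive": 1.0, "productive": 0.75, "neutral": 0.5,
               "distracting": 0.25, "very distracting": 0.0}

    def productivity_pulse(hours_by_category):
        # Weighted share of time spent productively, scaled to 0-100.
        total = sum(hours_by_category.values())
        earned = sum(WEIGHTS[cat] * h for cat, h in hours_by_category.items())
        return round(100 * earned / total)

    day = {"very productive": 2.5, "neutral": 1.0, "very distracting": 1.5}
    print(productivity_pulse(day))  # 60 for this 5-hour split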
On November 14th, the average productivity pulse across all RescueTime users was a not-so-shabby 60.

How much of our day is spent working on a digital device?

One of the biggest mistakes so many of us make when planning out our days is to assume we have 8+ hours to do productive work. This couldn’t be further from the truth.
What we found is that, on average, we only spend 5 hours a day working on a digital device.
And with an average productivity pulse of 53% for the year, that means we only have 12.5 hours a week to do productive work.

What does the average “productive day” look like?

Understanding our overall productivity is a fun exercise, but our data lets us go even deeper.
Looking at the workday (from 8am–6pm, Monday to Friday), how are we spending our time? When do we do our best work? Do different tasks normally get done at different times?
Here’s what we found out:

Our most productive work happens on Wednesdays at 3pm

Our data showed that we do our most productive work (represented by the light blue blocks) between 10am and noon and then again from 2–5pm each day. However, breaking it down to the hour, we do our most productive work on Wednesdays at 3pm.
Light blue represents our most productive work

Email rules our mornings, but never really leaves us alone

Our days start with email, with Monday morning at 9am being the clear winner for most time spent on email during the week.
Light blue represents our busiest time for emails

Software developers don’t hit peak productivity until 2pm each day

What about how specific digital workers spend their days?
Looking at the time spent in software development tools, our data paints a picture of a workday that doesn’t get going until the late morning and peaks between 2–6pm daily.
Light blue represents when we’re using software development tools

While writers are more likely to be early birds

For those who spend their time writing, it’s a different story.
Writing apps were used more evenly throughout each day with the most productive writing time happening on Tuesdays at 10am.
Light blue represents when we’re using writing tools

What were the biggest digital distractions of 2017?

It’s great to pat ourselves on the back about how productive we were in 2017. But we live in a distracted world and one of our greatest challenges is to stay focused and on task.
Here’s what our research discovered about the biggest time wasters of last year:

On an average day we use 56 different apps and websites

Depending on what you do, this number might not seem that bad. However, when we look at how we use those different apps and websites, things get a bit hairier.
When it comes to switching between different apps and websites (i.e. multitasking), we jump from one task to another nearly 300 times per day and switch between documents and pages within a site 1,300 times per day.

For Slack users, 8.8% of our day is spent in the app

There’s been a lot of talk about how much email and communication eats into our days. But what do the numbers look like?
What we found is that people who use Slack as their work communication tool spend almost 10% of their workday in the app (8.8% to be exact).

We check email or IM 40 times every day

What’s more telling is how often we check our communication tools, whether email or instant messengers like Slack or HipChat.
On average, we check our communication apps 40 times a day, or once every 7.5 minutes during our 5 hours of daily digital work time.

Almost 7% of every workday is spent on social media

I’m sure most of us try not to spend time on social media while at work. But our data showed that almost 7% of every workday was spent on social media.
It’s not only time spent that’s the issue, however. On average, we check in on social media sites 14 times per workday, or nearly 3 times an hour during our 5-hour digital day.

So, what does all this tell us about how we spend our days?
Well, first off, we need to remember that averages shouldn’t be treated as universal truths. Everyone works differently. But having a high-level look at productivity and the things that get in its way is a powerful tool in improving how you work.
The biggest piece of advice we can pull from all this data is to be aware of the limited time you have each day for meaningful work, and spend it wisely.
Our days are filled with distractions, and it’s up to us to protect what time we have.

Artificial Intelligence, AI in 2018 and beyond


Or how machine learning is evolving into AI
These are my opinions on where deep neural networks and machine learning are headed within the larger field of artificial intelligence, and how we can get more and more sophisticated machines that can help us in our daily routines.
Please note that these are not predictions or forecasts, but rather a detailed analysis of the trajectory of the field, the trends, and the technical needs we must meet to achieve useful artificial intelligence.
Not all machine learning targets artificial intelligence; there is also low-hanging fruit, which we will examine here.

Goals

The goal of the field is to achieve human and super-human abilities in machines that can help us in everyday life. Autonomous vehicles, smart homes, artificial assistants, and security cameras are a first target. Home cooking and cleaning robots are a second target, together with surveillance drones and robots. Another is assistants on mobile devices, or always-on assistants. Another is full-time companion assistants that can hear and see what we experience in our lives. One ultimate goal is a fully autonomous synthetic entity that can behave at or beyond human-level performance in everyday tasks.
See more about these goals here, and here, and here.

Software

Software is defined here as neural networks architectures trained with an optimization algorithm to solve a specific task.
Today, neural networks are the de-facto tool for learning to solve tasks that involve supervised learning: categorizing from a large dataset.
But this is not artificial intelligence, which requires acting in the real world, often learning without supervision and from experiences never seen before, and often combining previous knowledge from disparate circumstances to solve the current challenge.
How do we get from the current neural networks to AI?
Neural network architectures — when the field boomed a few years back, we often said neural networks had the advantage of learning the parameters of an algorithm automatically from data, and as such were superior to hand-crafted features. But we conveniently forgot to mention one little detail… the neural network architecture that is at the foundation of training to solve a specific task is not learned from data! In fact, it is still designed by hand, crafted from experience, and this is currently one of the major limitations of the field. There is research in this direction: here and here (for example), but much more is needed. Neural network architectures are the fundamental core of learning algorithms. Even if our learning algorithms are capable of mastering a new task, if the neural network is not correct, they will not be able to. The problem with learning neural network architectures from data is that it currently takes too long to experiment with multiple architectures on a large dataset. One has to train multiple architectures from scratch and see which one works best. Well, this is exactly the time-consuming trial-and-error procedure we are using today! We ought to overcome this limitation and put more brain-power on this very important issue.
Unsupervised learning — we cannot always be there for our neural networks, guiding them at every step of their lives and through every experience. We cannot afford to correct them at every instance and provide feedback on their performance. We have our lives to live! But that is exactly what we do today with supervised neural networks: we offer help at every instance to make them perform correctly. Humans, instead, learn from just a handful of examples, and can self-correct and learn more complex data in a continuous fashion. We have talked about unsupervised learning extensively here.
Predictive neural networks — a major limitation of current neural networks is that they do not possess one of the most important features of human brains: predictive power. One major theory about how the human brain works is that it constantly makes predictions: predictive coding. If you think about it, we experience it every day, as when you lift an object that you thought was light but turned out to be heavy. It surprises you, because as you approached to pick it up, you had predicted how it was going to affect you and your body, and your environment overall.
Prediction allows us not only to understand the world, but also to know when we do not understand it, and when we should learn. In fact, we save information about things we do not know and that surprise us, so next time they will not! And cognitive abilities are clearly linked to the attention mechanism in our brain: our innate ability to forgo 99.9% of our sensory inputs and focus only on the data that is very important for our survival — where is the threat and where do we run to avoid it. Or, in the modern world, where is our cell-phone as we walk out the door in a rush.
Building predictive neural networks is at the core of interacting with the real world, and acting in a complex environment. As such this is the core network for any work in reinforcement learning. See more below.
We have talked extensively about the topic of predictive neural networks, and were one of the pioneering groups to study them and create them. For more details on predictive neural networks, see here, and here, and here.
Limitations of current neural networks — we have talked before about the limitations of neural networks as they are today: they cannot predict, they cannot reason about content, and they have temporal instabilities. We need a new kind of neural network, which you can read about here.
Neural Network Capsules are one approach to solve the limitation of current neural networks. We reviewed them here. We argue here that Capsules have to be extended with a few additional features:
  • operation on video frames: this is easy, as all we need to do is make capsule routing look at multiple data-points in the recent past. This is equivalent to an associative memory over the most recent important data points. Notice that these are not the most recent representations of recent frames, but rather the top most recent different representations. Different representations with different content can be obtained, for example, by saving only representations that differ by more than a pre-defined value. This important detail allows us to save relevant information about the most recent history only, and not a useless series of correlated data-points.
  • predictive neural network abilities: this is already part of the dynamic routing, which forces layers to predict the next layer representations. This is a very powerful self-learning technique that in our opinion beats all other kinds of unsupervised representation learning we have developed so far as a community. Capsules need now to be able to predict long-term spatiotemporal relationships, and this is not currently implemented.
Continuous learning  — this is important because neural networks need to continue to learn new data-points continuously for their life. Current neural networks are not able to learn new data without being re-trained from scratch at every instance. Neural networks need to be able to self-assess the need of new training and the fact that they do know something. This is also needed to perform in real-life and for reinforcement learning tasks, where we want to teach machines to do new tasks without forgetting older ones.
Transfer learning — or, how do we have these algorithms learn on their own by watching videos, just like we do when we want to learn how to cook something new? That is an ability that requires all the components listed above, and it is also important for reinforcement learning. Then you could really train your machine to do what you want by just giving it an example, the same way we humans do every day!
Reinforcement learning — this is the holy grail of deep neural network research: teaching machines how to learn to act in an environment, the real world! This requires self-learning, continuous learning, predictive power, and a lot more we do not know. There is much work in the field of reinforcement learning, but to the author it is really only scratching the surface of the problem, still millions of miles away from solving it. We already talked about this here.
Reinforcement learning is often referred to as the “cherry on the cake”, meaning that it is just minor training on top of a plastic synthetic brain. But how can we get a “generic” brain that then solves all problems easily? It is a chicken-and-egg problem! Today, to solve reinforcement learning problems one by one, we use standard neural networks:
  • a deep neural network that takes large data inputs, like video or audio, and compresses them into representations
  • a sequence-learning neural network, such as an RNN, to learn tasks
Both these components are obvious solutions to the problem, and both are currently clearly wrong, but that is what everyone uses because they are among the available building blocks. As such, the results are unimpressive: yes, we can learn to play video-games from scratch and master fully-observable games like chess and go, but I do not need to tell you that this is nothing compared to solving problems in a complex world. Imagine an AI that can play Horizon Zero Dawn better than humans… I want to see that!
But this is what we want. Machine that can operate like us.
Our proposal for reinforcement learning work is detailed here. It uses a predictive neural network that can operate continuously and an associative memory to store recent experiences.
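To ground what “learning to act” means at its very simplest, here is a self-contained tabular Q-learning toy in Python. It sits far below the predictive, memory-equipped systems argued for above, but it shows the core update rule everything else builds on.

    import numpy as np

    n_states, n_actions = 10, 2          # a 1-D corridor; actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration
    rng = np.random.default_rng(0)

    for episode in range(500):
        s = 0
        while s < n_states - 1:          # the last cell holds the reward
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q[:-1].argmax(axis=1))  # learned policy: all 1s, i.e. "walk right"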
No more recurrent neural networks — recurrent neural networks (RNNs) have their days numbered. RNNs are particularly hard to parallelize for training, and they are slow even on special custom machines because of their very high memory-bandwidth usage — as such they are memory-bandwidth-bound rather than computation-bound; see here for more details. Attention-based neural networks are more efficient and faster to train and deploy, and they suffer much less from scalability issues in training and deployment. Attention in neural networks has the potential to really revolutionize a lot of architectures, yet it has not been recognized as much as it should be. The combination of associative memories and attention is at the heart of the next wave of neural network advancements.
Attention has already been shown to learn sequences as well as RNNs do, at up to 100x less computation! Who can ignore that?
We expect attention-based neural networks to slowly supplant RNN-based speech recognition, and to find their way into reinforcement learning architectures and AI in general.
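The core attention operation itself is only a few lines. Here is a minimal scaled dot-product self-attention sketch in NumPy, showing how a whole sequence is processed in one parallel step instead of one step at a time, RNN-style:

    import numpy as np

    def attention(Q, K, V):
        # Scaled dot-product attention: similarity scores, softmax, weighted mix.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    seq_len, d = 6, 8
    x = np.random.randn(seq_len, d)
    out = attention(x, x, x)  # self-attention: every position attends to all others
    print(out.shape)          # (6, 8), computed in one parallel step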
Localization of information in categorization neural networks — we have talked extensively here about how we can localize and detect key-points in images and video. This is practically a solved problem that will be embedded in future neural network architectures.

Hardware

Hardware for deep learning is at the core of progress. Let us not forget that the rapid expansion of deep learning in 2008–2012 and in recent years was mainly due to hardware:
  • cheap image sensors in every phone allowed us to collect huge datasets — yes, helped by social media, but only to a secondary extent
  • GPUs made it possible to accelerate the training of deep neural networks
And we have talked about hardware extensively before. But we need to give you a recent update! The last 1–2 years saw a boom in the area of machine learning hardware, in particular hardware targeting deep neural networks. We have significant experience here: we are FWDNXT, the makers of SnowFlake, a deep neural network accelerator.
There are several companies working in this space: NVIDIA (obviously), Intel, Nervana, Movidius, Bitmain, Cambricon, Cerebras, DeePhi, Google, Graphcore, Groq, Huawei, ARM, Wave Computing. All are developing custom high-performance micro-chips that will be able to train and run deep neural networks.
The key is to provide the lowest power and the highest measured performance while computing recent, useful neural network operations — not raw theoretical operations per second, as many claim to do.
But few people in the field understand how hardware can really change machine learning, neural networks and AI in general. And few understand what is important in micro-chips and how to develop them.
Here is our list:
  • training or inference? — many companies are creating micro-chips that can train neural networks, aiming to grab a portion of the market of NVIDIA, which is the de-facto training hardware to date. But training is a small part of the story and of the applications of deep neural networks. For every training step there are a million deployments in actual applications. Take one of the object-detection neural networks you can use on the cloud today: it was trained once, yes on a lot of images, but once trained it can be used by millions of computers on billions of data items. What we are trying to say here: training hardware matters as little as the number of times you train compared to the number of times you deploy. Also, making a chipset for training requires extra hardware and extra tricks, which translates into higher power for the same performance, and thus not the best possible fit for current deployments. Training hardware is important, and an easy modification of inference hardware, but it is not as important as many think.
  • Applications — hardware that can provide training faster and at lower power is really important in the field, because it allows us to create and test new models and applications faster. But the real significant step forward will be in hardware for applications, mostly in inference. There are many applications today that are not possible or practical because the hardware, not the software, is missing or inefficient. For example, our phones could be speech-based assistants, but they are currently sub-optimal because they cannot operate always-on. Even our home assistants are tied to their power supplies and cannot follow us around the house unless we sprinkle multiple microphones or devices around. But maybe the largest application of all is removing the phone screen from our lives and embedding it into our visual system. Without super-efficient hardware, all this and many more applications (small robots) will not be possible.
  • winners and losers — in hardware, the winners will be the ones that can operate at the lowest possible power per unit of performance and move into the market quickly. Imagine replacing the SoC in cell-phones: it happens every year. Now imagine embedding neural network accelerators into memories. This may conquer much of the market faster and with significant penetration. That is what we call a winner.
About neuromorphic neural networks hardware, please see here.

Applications

We talked briefly about applications in the Goals section above, but we really need to go into detail here. How are AI and neural networks going to get into our daily lives?
Here is our list:
  • categorizing images and videos — already here in many cloud services. The next step is doing the same in smart camera feeds — also here today from many providers. Neural network hardware will allow us to remove the cloud and process more and more data locally: a winner for privacy and for saving Internet bandwidth.
  • speech-based assistants — they are becoming a part of our lives, as they play music and control basic devices in our “smart” homes. But dialogue is such a basic human activity that we often take it for granted. Small devices you can talk to are a revolution that is happening right now. Speech-based assistants are getting better and better at serving us, but they are still tied to the power grid. The real assistant we want moves with us. How about our cell-phone? Well, again, hardware wins here, because it will make that possible. Alexa and Cortana and Siri will be always on and always with you. Your phone will be your smart home — very soon. That is again another victory for the smart phone. But we also want it in our car and as we move around town. We need local processing of voice, and less and less cloud: more privacy and lower bandwidth costs. Again, hardware will give us all that in 1–2 years.
  • the real artificial assistants — voice is great, but what we really want is something that can also see what we see, and analyze our environment as we move around. See an example here and ultimately here. This is the real AI assistant we can fall in love with. And neural network hardware will again grant your wish, because analyzing video feeds is very computationally expensive, currently at the theoretical limits of existing silicon hardware. In other words, it is a lot harder to do than speech-based assistants. But it is not impossible, and many smart startups like AiPoly already have all the software for it; they just lack powerful hardware for running it on phones. Notice also that replacing the phone screen with a wearable glasses-like device will really make our assistant part of us!
What we want is Her from the movie Her!
  • the cooking robot — the next biggest appliance will be a cooking and cleaning robot. Here we may soon have the hardware, but we are clearly lacking the software. We need transfer learning, continuous learning, and reinforcement learning, all working like a charm. Because you see: every recipe is different, and every cooking ingredient looks different. We cannot hard-code all these options. We really need a synthetic entity that can learn and generalize well to do this. We are far from it, but not as far as you might think. Just a handful of years away at the current pace of progress. I will surely work on this, as I have done in the last few years.

Thursday, January 4, 2018

Top 3 Mobile Technology Trends You Can’t Miss in 2018


Before I kick-start this article, please allow me to wish
“A Very Very Very… Happy New Year 2018” to all you lovely readers and my well-wishers.
It has been an amazing journey so far, being a part of this mobile app revolution since 2006. I feel blessed to have seen both the pre- and post-smartphone eras and to have experienced the change myself, as a developer, a leader, and now the father of my own mobility startup. So I thought I would analyze the trendsetters that will rule this new year.
So here are my top three technology trends you all should look out for in your endeavors in this new year 2018, which, as always, will offer you loads of new opportunities to rock this world. Being a part of this mobile app ecosystem, I feel immense pride in writing this article for all you visionaries and future mobile appreneurs.

1. Augmented Reality/ Virtual Reality:

Wikipedia defines AR as:

Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are “augmented” by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data.

As per Wikipedia, VR is:

Virtual reality (VR) is a computer technology that uses virtual reality headsets or multi-projected environments, sometimes in combination with physical environments or props, to generate realistic images, sounds and other sensations that simulate a user’s physical presence in a virtual or imaginary environment.
Mobile AR could become the primary driver of a $108 billion VR/AR market by 2021 (underperform $94 billion, outperform $122 billion) with AR taking the lion’s share of $83 billion and VR $25 billion.
A lot happened in AR in 2017, with Google and Apple investing heavily to harness its true potential. Apple launched ARKit and Google came up with ARCore, enabling developers to innovate and create meaningful mobile solutions for smartphone users.
AR adds a digital layer over real-world information to give a more realistic and unambiguous outlook. AR-powered apps will gradually empower retail, life sciences, manufacturing, and many other domains through the wide range of AR apps being developed to cater to these sectors.

I Feel:

AR will take a huge leap forward, further revolutionizing the ever-progressing gaming industry, and will stretch beyond it to empower the digital marketing world, where gamification will be employed to attract and acquire new consumers for brands. Marketers need to adopt this tool to target their customers beyond conventional physical marketing. With most marketers seeing augmented reality as a way to provide a compelling user experience, we will soon see a plethora of creative AR apps alluring consumers to buy their customized offerings.
Virtual reality technologies will remain focused on the gaming and events sphere, as they already were in 2017, and will go beyond it to offer a more evolved app experience and an elevated dose of entertainment for gamers.

I find:

With the iPhone X, Apple is trying to change the face of AR by making it a common use case for the masses. A whole bunch of top tech players also think this technology — also called mixed reality or immersive environments — is all set to create a truly digital-physical blended environment for people who mostly consume the digital world through their mobile powerhouse.

Some of The Popular AR/VR Companies(As reported by Fast Company):

  1. Google: using VR to analyze your living room
  2. Snapchat: helping their app users take control of their own augmented reality
  3. Facebook: gathering IRL friends in VR
  4. NVIDIA: providing the power to process VR
…and many more.

2. Internet of Things: A Connected World of Hardware & Software:

Gartner predicts 26 billion connected devices by 2020, ranging from LEDs, toys, sports equipment, and medical equipment to controllable power sockets. We will be privileged to witness a world where everything is connected through these small devices, bringing information right to where you are standing. This information will also be tapped right where it is generated, empowering data centres through edge computing.
Smart objects will interact with our smartphones and tablets, which will eventually function like a TV remote: displaying and analyzing data, interfacing with social networks to monitor “things” that can tweet or post, paying for subscription services, ordering replacement consumables, and updating object firmware.

Big Tech Giants Are Already Bullish on the IoT Connected World:

  • Microsoft is powering its popular Intelligent Systems Service by integrating IoT capabilities into its enterprise service offerings.
  • Some of the known communication technologies powering the IoT concept are RFID, Wi-Fi, EnOcean, RIOT OS, etc.
  • Google is working on two ambitious projects, Nest and Brillo, centered on using IoT to fuel your home-automation needs. Brillo is an IoT OS that supports Wi-Fi, Bluetooth Low Energy, and other Android features.
Established companies such as Microsoft, with its Intelligent Systems Service, and enterprise software vendors like SAP, with its Internet of Things Solutions, are also adding Internet of Things capabilities to their offerings.
  • Amazon launched the Amazon Echo, an amazing device that works on voice commands to answer your queries, play songs, and control smart devices within a certain range.

I Feel:

IoT & IoT-Based Apps:

They are here to stay and will play a very crucial role in helping you navigate this world with more ease and comfort: making your commuting safe, your communication smart, your shopping productive, your learning more engaging, and much more, to make your life effective and efficient. In fact, IoT is slowly becoming part of every aspect of our lives. Not only will IoT apps augment our comfort, but they will also give us more control to simplify routine work and personal tasks.

Internet Of Things Evolution:

Most IoT-powered devices already rely on mobile devices to syndicate data, especially in consumer IoT. With the surge in overall use of the Internet of Things, I feel more mobile apps will be developed to manage these smart devices.

3. Blockchain: Powering the World of Cryptos:

As Per Investopedia:
A blockchain is a digitized, decentralized, public ledger of all cryptocurrency transactions. Constantly growing as ‘completed’ blocks (the most recent transactions) are recorded and added to it in chronological order, it allows market participants to keep track of digital currency transactions without central recordkeeping. Each node (a computer connected to the network) gets a copy of the blockchain, which is downloaded automatically.
To know more about blockchain, please refer
  1. Blockchain Technology Part 1 : What and Why ?
  2. Smart Contract A Blockchain Innovation for Non-Techies
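To see why tampering with such a ledger is hard, here is a toy Python sketch of the hash-chaining idea (just the chaining, not a real cryptocurrency):

    import hashlib, json, time

    def make_block(transactions, prev_hash):
        # Each block commits to its predecessor by storing that block's hash.
        block = {"time": time.time(), "tx": transactions, "prev": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    chain = [make_block(["genesis"], "0" * 64)]
    chain.append(make_block(["alice -> bob: 5"], chain[-1]["hash"]))
    chain.append(make_block(["bob -> carol: 2"], chain[-1]["hash"]))

    # Any edit to an old block changes its hash and breaks every later link.
    print(all(b["prev"] == a["hash"] for a, b in zip(chain, chain[1:])))  # True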
As per a recent study by IBM:
9 in 10 government firms are planning to invest in blockchain for financial transaction management, asset management, contract management, and regulatory compliance purposes.
Another research report, by Infosys, says:
One-third of banks are expected to adopt commercial blockchain in 2018.
So it is quite clear that mobility solutions based on secure transactions will rule fin-tech and other industries where security lies at the core. App developers will have a crucial role to play, as they will be expected to develop more innovative app solutions to cater to the need for a secure and connected world. Your mobile phone generates lots of confidential information that needs to be secured from third-party breaches. So, techies, gear up and pull up your socks: I feel blockchain-based security mechanisms will be developed for mobile apps in the coming years and will be needed in all kinds of industries, from fin-tech and eCommerce to insurance tech.
Blockchain-powered cryptos like Bitcoin, Ripple, and Ethereum are already a rage in the technology and investment world. They have fired the imagination of many tech innovators, leading them to adopt blockchain tech to develop wallets and currencies, most of them built for mobile devices and computer systems, thereby offering lots of opportunities for techies to adopt it as a futuristic career option.
Using blockchain tech, entrepreneurs will develop solutions, mostly on mobile, to validate transactions securely, manage contracts smartly, store digital currencies (like Bitcoin, XRP, etc.), manage voting, secure hassle-free shopping, and power banking transactions, among many other innovative solutions targeted at making consumers’ lives more resourceful and productive.

Blockchain Use Case By R3:

There are many more trends that will disrupt the mobility world, such as:
  • Artificial Intelligence: machine learning and deep learning will play a crucial role in giving machines the intelligence to make smart decisions without human intervention. Mobile chatbots are one prime example of such an AI use case. Apps like Siri and Google Now already harness AI technology and will inspire many more voice-based and image-based AI innovations from mobile appreneurs. App developers will tap mobile data and give it more intelligent forms to make our lives smarter with time.
  • Mobile computing/cloud computing: mobility solutions based on these will be in high demand, especially for big enterprises, where business decisions are made through intelligent data analytics. All this data will be stored in the cloud, and mobile will play a major role in harnessing its power to serve consumers in real time.
Some of My Other Relevant Tech Article Which Can be Useful:
  1. All About Edge Computing- How It Is Changing The Present Past & Future Of IoT?
  2. Top 3 Technology Trends For 2018, Which Will Be A Game Changer !
  3. All You Wanted To Know About BitCoin?
  4. NLP Fundamentals: Where Humans Team Up With Machines To Help It Speak
Summary:
Having seen the world of mobility change from the feature phone to the smartphone era, I am amazed at how it has transformed human life. Now we can communicate in split seconds, transact in no time, buy what we need with one touch, get entertained when and where we want, shower our love on our loved ones without being physically present, and do many more things one can imagine, all through this tiny, powerful device.
So, as a developer and a tech visionary, you have the greater responsibility of making sure that you create tools that complement user needs and impact users deeply. It is your duty to entertain them, educate them, and make them feel safe and secure on the go.
Let me end by extending my sincere gratitude to all you awesome readers for showering me with your love and constantly inspiring me to write more and learn more.

