Programming & IT Tricks

Saturday, January 6, 2018

Artificial Intelligence, AI in 2018 and beyond


Or how machine learning is evolving into AI
These are my opinions on where deep neural networks and machine learning are headed in the larger field of artificial intelligence, and how we can get more and more sophisticated machines that can help us in our daily routines.
Please note that these are not predictions or forecasts, but rather a detailed analysis of the trajectory of the field, the trends, and the technical needs we have to meet to achieve useful artificial intelligence.
Not all machine learning targets artificial intelligence; there is also low-hanging fruit, which we will examine here.

Goals

The goal of the field is to achieve human and super-human abilities in machines that can help us in our everyday lives. Autonomous vehicles, smart homes, artificial assistants, and security cameras are a first target. Home cooking and cleaning robots are a second target, together with surveillance drones and robots. Another one is assistants on mobile devices or always-on assistants. Another is full-time companion assistants that can hear and see what we experience in our lives. One ultimate goal is a fully autonomous synthetic entity that can behave at or beyond human-level performance in everyday tasks.
See more about these goals here, and here, and here.

Software

Software is defined here as neural networks architectures trained with an optimization algorithm to solve a specific task.
Today neural networks are the de-facto tool for learning to solve tasks that involve supervised learning: categorizing items from a large dataset.
But this is not artificial intelligence, which requires acting in the real world, often learning without supervision and from experiences never seen before, and often combining previous knowledge from disparate circumstances to solve the current challenge.
How do we get from the current neural networks to AI?
Neural network architectures — when the field boomed a few years back, we often said it had the advantage of learning the parameters of an algorithm automatically from data, and as such was superior to hand-crafted features. But we conveniently forgot to mention one little detail… the neural network architecture that is at the foundation of training to solve a specific task is not learned from data! In fact it is still designed by hand, crafted from experience, and this is currently one of the major limitations of the field. There is research in this direction: here and here (for example), but much more is needed. Neural network architectures are the fundamental core of learning algorithms. Even if our learning algorithms are capable of mastering a new task, if the neural network architecture is not right, they will not be able to. The problem with learning neural network architectures from data is that it currently takes too long to experiment with multiple architectures on a large dataset. One has to train multiple architectures from scratch and see which one works best. Well, this is exactly the time-consuming trial-and-error procedure we are using today! We ought to overcome this limitation and put more brain-power on this very important issue.
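To make that trial-and-error loop concrete, here is a minimal sketch of random architecture search over a toy search space. The dataset, the search space and the budget are invented for illustration; real work in this direction uses far more sophisticated search strategies.
import random

import torch
import torch.nn as nn

# Toy data standing in for "a large dataset"; real search would use a proper
# train/validation split and a much larger budget.
X = torch.randn(512, 20)
y = (X.sum(dim=1) > 0).long()

def sample_architecture(input_dim=20, num_classes=2):
    """Sample one candidate architecture from a tiny hand-defined search space."""
    depth = random.choice([1, 2, 3])
    width = random.choice([16, 32, 64])
    layers, in_dim = [], input_dim
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, num_classes))
    return nn.Sequential(*layers)

def proxy_score(model, steps=200):
    """Cheap proxy evaluation: a short training run, then training accuracy."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(X), y).backward()
        optimizer.step()
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

best_score, best_model = -1.0, None
for trial in range(5):                       # tiny search budget
    candidate = sample_architecture()
    score = proxy_score(candidate)
    if score > best_score:
        best_score, best_model = score, candidate
print(f"best proxy accuracy: {best_score:.2f}")
print(best_model)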
Unsupervised learning — we cannot always be there for our neural networks, guiding them at every step of their lives and through every experience. We cannot afford to correct them at every instance and provide feedback on their performance. We have our lives to live! But that is exactly what we do today with supervised neural networks: we offer help at every instance to make them perform correctly. Humans, instead, learn from just a handful of examples, and can self-correct and learn more complex data in a continuous fashion. We have talked about unsupervised learning extensively here.
Predictive neural networks — a major limitation of current neural networks is that they do not possess one of the most important features of human brains: their predictive power. One major theory about how the human brain works is that it constantly makes predictions: predictive coding. If you think about it, we experience it every day. When you lift an object you thought was light but that turns out to be heavy, it surprises you, because as you reached to pick it up you had already predicted how it was going to affect you, your body, and your environment overall.
Prediction allows us not only to understand the world, but also to know when we do not understand it, and when we should learn. In fact, we save information about the things we do not know and that surprise us, so that next time they will not! And cognitive abilities are clearly linked to the attention mechanism in our brain: our innate ability to forgo 99.9% of our sensory inputs and focus only on the data that matters for our survival — where the threat is and where we should run to avoid it. Or, in the modern world, where our cell phone is as we walk out the door in a rush.
Building predictive neural networks is at the core of interacting with the real world, and acting in a complex environment. As such this is the core network for any work in reinforcement learning. See more below.
We have talked extensively about the topic of predictive neural networks, and were one of the pioneering groups to study them and create them. For more details on predictive neural networks, see here, and here, and here.
Limitations of current neural networks — we have talked before about the limitations of neural networks as they are today: they cannot predict, cannot reason about content, and have temporal instabilities. We need a new kind of neural network, which you can read about here.
Neural Network Capsules are one approach to solve the limitation of current neural networks. We reviewed them here. We argue here that Capsules have to be extended with a few additional features:
  • operation on video frames: this is easy, as all we need to do is make capsule routing look at multiple data points in the recent past. This is equivalent to an associative memory over the most recent important data points. Notice these are not the most recent representations of recent frames, but rather the most recent distinct representations. Distinct representations with different content can be obtained, for example, by saving only representations that differ by more than a pre-defined value. This important detail lets us save relevant information about the most recent history only, and not a useless series of correlated data points (see the sketch after this list).
  • predictive neural network abilities: this is already part of dynamic routing, which forces layers to predict the next layer's representations. This is a very powerful self-learning technique that, in our opinion, beats all other kinds of unsupervised representation learning we have developed so far as a community. Capsules now need to be able to predict long-term spatiotemporal relationships, and this is not currently implemented.
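As a rough illustration of the "save only representations that differ by more than a pre-defined value" rule from the first bullet, here is a small sketch. It is not capsule code, just the memory rule, with the threshold, distance metric and capacity chosen arbitrarily.
from collections import deque

import numpy as np

class DistinctRecentMemory:
    """Buffer of the most recent *distinct* representations: a new vector is
    stored only if it differs from the last stored one by more than a
    pre-defined threshold."""

    def __init__(self, threshold=0.5, capacity=8):
        self.threshold = threshold
        self.buffer = deque(maxlen=capacity)

    def maybe_store(self, representation):
        rep = np.asarray(representation, dtype=float)
        if not self.buffer or np.linalg.norm(rep - self.buffer[-1]) > self.threshold:
            self.buffer.append(rep)
            return True
        return False

memory = DistinctRecentMemory(threshold=0.5, capacity=8)
base = np.zeros(16)
for t in range(100):
    if t % 25 == 0:                                # occasional "scene change"
        base = np.random.randn(16)
    frame_rep = base + 0.01 * np.random.randn(16)  # near-duplicate frames in between
    memory.maybe_store(frame_rep)
print(f"kept {len(memory.buffer)} distinct representations out of 100 frames")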
Continuous learning — this is important because neural networks need to keep learning new data points continuously throughout their lives. Current neural networks are not able to learn new data without being re-trained from scratch every time. Neural networks need to be able to self-assess the need for new training and the fact that they already know something. This is also needed to perform in real life and in reinforcement learning tasks, where we want to teach machines to do new tasks without forgetting older ones.
Transfer learning — or how do we get these algorithms to learn on their own by watching videos, just like we do when we want to learn how to cook something new? That is an ability that requires all the components listed above, and it is also important for reinforcement learning. Then you can really train your machine to do what you want by just giving it an example, the same way we humans do every day!
Reinforcement learning — this is the holy grail of deep neural network research: teach machines how to learn to act in an environment, the real world! This requires self-learning, continuous learning, predictive power, and a lot more we do not know. There is much work in the field of reinforcement learning, but to the author it is really only scratching the surface of the problem, still millions of miles away from it. We already talked about this here.
Reinforcement learning is often referred to as the “cherry on the cake”, meaning that it is just minor training on top of a plastic synthetic brain. But how can we get a “generic” brain that then solves all problems easily? It is a chicken-and-egg problem! Today, to solve reinforcement learning problems one by one, we use standard neural networks:
  • a deep neural network that takes large data inputs, like video or audio, and compresses them into representations
  • a sequence-learning neural network, such as an RNN, to learn tasks
Both of these components are obvious solutions to the problem, and currently they are clearly wrong, but that is what everyone uses because they are some of the available building blocks. As such, results are unimpressive: yes, we can learn to play video games from scratch and master fully observable games like chess and Go, but I do not need to tell you that this is nothing compared to solving problems in a complex world. Imagine an AI that can play Horizon Zero Dawn better than humans… I want to see that!
But this is what we want: machines that can operate like us.
Our proposal for reinforcement learning work is detailed here. It uses a predictive neural network that can operate continuously and an associative memory to store recent experiences.
No more recurrent neural networks — recurrent neural networks (RNNs) have their days counted. RNNs are particularly hard to parallelize for training and are slow even on special custom hardware, because of their very high memory bandwidth usage — as such they are memory-bandwidth-bound rather than computation-bound; see here for more details. Attention-based neural networks are more efficient and faster to train and deploy, and they suffer much less from scalability issues in training and deployment. Attention in neural networks has the potential to really revolutionize a lot of architectures, yet it has not been recognized as much as it should be. The combination of associative memories and attention is at the heart of the next wave of neural network advancements.
Attention has already been shown to learn sequences as well as RNNs do, with up to 100x less computation! Who can ignore that?
We expect attention-based neural networks to slowly supplant RNN-based speech recognition, and also to find their way into reinforcement learning architectures and AI in general.
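For readers unfamiliar with attention, the sketch below shows why it parallelizes so well: unlike an RNN, which must step through a sequence one element at a time, scaled dot-product attention processes the whole sequence with a few matrix products. The shapes and weights are toy values for illustration only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention pass: every output row is a weighted mix of the value
    rows, with weights given by query/key similarity. The whole sequence is
    handled in a few matrix products, with no recurrence to unroll."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights

seq_len, d = 6, 8
x = np.random.randn(seq_len, d)                      # toy sequence of 6 token embeddings
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
outputs, attn_map = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(outputs.shape, attn_map.shape)                 # (6, 8) outputs, (6, 6) attention map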
Localization of information in categorization neural networks — we have talked extensively about how we can localize and detect key-points in images and video here. This is practically a solved problem that will be embedded in future neural network architectures.

Hardware

Hardware for deep learning is at the core of progress. Let us not forget that the rapid expansion of deep learning in 2008–2012 and in recent years is mainly due to hardware:
  • cheap image sensors in every phone allowed us to collect huge datasets — yes, helped by social media, but only to a secondary extent
  • GPUs made it possible to accelerate the training of deep neural networks
And we have talked about hardware extensively before. But we need to give you a recent update! The last 1–2 years saw a boom in the area of machine learning hardware, and in particular in hardware targeting deep neural networks. We have significant experience here: we are FWDNXT, the makers of SnowFlake, a deep neural network accelerator.
There are several companies working in this space: NVIDIA (obviously), Intel, Nervana, Movidius, Bitmain, Cambricon, Cerebras, DeePhi, Google, Graphcore, Groq, Huawei, ARM, Wave Computing. All are developing custom high-performance micro-chips that will be able to train and run deep neural networks.
The key is to provide the lowest power and the highest measured performance while computing the recent, useful neural network operations, not raw theoretical operations per second, as many claim to do.
But few people in the field understand how hardware can really change machine learning, neural networks and AI in general. And few understand what is important in micro-chips and how to develop them.
Here is our list:
  • training or inference? — many companies are creating micro-chips that can provide training of neural networks. This is to gain a portion of the market of NVIDIA, which is the de-facto training hardware to date. But training is a small part of the story and of the applications of deep neural networks. For every training step there are a million deployments in actual applications. For example, take one of the object-detection neural networks you can use in the cloud today: it was trained once, yes on a lot of images, but once trained it can be used by millions of computers on billions of data points. What we are trying to say here: training hardware matters about as little as the number of times you train compared to the number of times you deploy. And making a chipset for training requires extra hardware and extra tricks. This translates into higher power for the same performance, and thus is not the best possible choice for current deployments. Training hardware is important, and an easy modification of inference hardware, but it is not as important as many think.
  • Applications — hardware that can provide training faster and at lower power is really important in the field, because it will allow us to create and test new models and applications faster. But the real significant step forward will be in hardware for applications, mostly in inference. There are many applications today that are not possible or practical because hardware, not software, is missing or inefficient. For example, our phones can be speech-based assistants, but they are currently sub-optimal because they cannot operate always-on. Even our home assistants are tied to their power supplies, and cannot follow us around the house unless we sprinkle multiple microphones or devices around. But maybe the largest application of all is removing the phone screen from our lives and embedding it into our visual system. Without super-efficient hardware, all this and many more applications (small robots) will not be possible.
  • winners and losers — in hardware, the winners will be the ones that can operate at the lowest possible power per unit of performance and move into the market quickly. Imagine replacing the SoC in cell phones; it happens every year. Now imagine embedding neural network accelerators into memories. This may conquer much of the market faster and with significant penetration. That is what we call a winner.
About neuromorphic neural networks hardware, please see here.

Applications

We talked briefly about applications in the Goals section above, but we really need to go into details here. How are AI and neural networks going to get into our daily lives?
Here is our list:
  • categorizing images and videos — already here in many cloud services. The next step is doing the same in smart camera feeds — also available today from many providers. Neural network hardware will allow us to remove the cloud and process more and more data locally: a win for privacy and for saving Internet bandwidth.
  • speech-based assistants — they are becoming a part of our lives, as they play music and control basic devices in our “smart” homes. But dialogue is such a basic human activity that we often take it for granted. Small devices you can talk to are a revolution that is happening right now. Speech-based assistants are getting better and better at serving us, but they are still tied to the power grid. The real assistant we want moves with us. How about our cell phone? Well, again hardware wins here, because it will make that possible. Alexa, Cortana and Siri will be always on and always with you. Your phone will be your smart home — very soon. That is again another victory for the smart phone. But we also want it in our car and as we move around town. We need local processing of voice, and less and less cloud: more privacy and lower bandwidth costs. Again, hardware will give us all that in 1–2 years.
  • the real artificial assistants — voice is great, but what we really want is something that can also see what we see and analyze our environment as we move around. See an example here and ultimately here. This is the real AI assistant we can fall in love with. And neural network hardware will again grant your wish, as analyzing video feeds is very computationally expensive and currently at the theoretical limits of current silicon hardware. In other words, it is a lot harder to do than speech-based assistants. But it is not impossible, and many smart startups like AiPoly already have all the software for it, but lack powerful hardware for running it on phones. Notice also that replacing the phone screen with a wearable glasses-like device will really make our assistant part of us!
What we want is Her from the movie Her!
  • the cooking robot — the next big appliance will be a cooking and cleaning robot. Here we may soon have the hardware, but we are clearly lacking the software. We need transfer learning, continuous learning and reinforcement learning, all working like a charm. Because you see: every recipe is different, and every cooking ingredient looks different. We cannot hard-code all these options. We really need a synthetic entity that can learn and generalize well to do this. We are far from it, but not that far: just a handful of years away at the current pace of progress. I sure will work on this, as I have in the last few years.

Friday, January 5, 2018

How we recreated Amazon Go in 36 hours


John Choi, me, our project apparatus, Ruslan Nikolaev, and Soheil Hamidi at our demo!
My colleagues and I wanted to create something that would make people go “wow” at our latest hackathon.
Because imitation is the sincerest form of flattery and IoT is incredibly fun to work with, we decided to create our own version of Amazon Go.
Before I explain what it took to make this, here’s the 3 minute demo of what we built!
There were four of us. Ruslan, a great full-stack developer who had experience working with Python. John, an amazing iOS developer. Soheil, another great full-stack developer who had experience with Raspberry Pi. And finally, there was me, on the tail end of an Android developer internship.
I quickly realized that there were a lot of moving parts to this project. Amazon Go works on the basis of real-time proximity sensors in conjunction with a real-time database of customers and their carts.
We also wanted to take things a step further and make the entry/exit experience seamless. We wanted to let people enter and exit the store without needing to tap their phones.
In order to engage users as a consumer-facing product, our app would need a well-crafted user interface, like the real Amazon Go.
On the day before the hackathon, I put together a pseudo-design doc outlining what we needed to do within the 36 hour deadline. I incorporated the strengths of our team and the equipment at hand. The full hastily assembled design doc can be seen below.

There were six main components to EZShop, our version of Amazon Go.
A quick diagram I whipped up visualizing the components of this project

The Kairos Facial Recognition API

The Kairos facial recognition API was a fundamental component for us. It abstracted the ability to identify and store unique faces. It had two APIs that we used: /enroll and /verify.
/enroll is described as:
Takes a photo, finds the faces within it, and stores the faces into a gallery you create.
We enrolled all new customers into a single “EZShop” gallery. A unique face_id attribute would be returned and stored with the customer’s registered name in our real-time database.
When we wanted to verify a potential customer’s image, we would POST it to the /verify endpoint. This would return the face_id with the highest probability of a match.
In a real-world implementation, it probably would have been a better idea to use a natively implemented facial recognition pipeline with TensorFlow instead of a network API. But given our time constraints, the API served us very well.
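For context, the two calls looked roughly like the sketch below, written in Python with the requests library. The header names, payload fields and response handling are assumptions based on how we remember the Kairos docs, so check the current documentation before reusing them; the credentials are placeholders.
import base64

import requests

KAIROS_URL = "https://api.kairos.com"
HEADERS = {
    "app_id": "YOUR_APP_ID",     # placeholder credentials
    "app_key": "YOUR_APP_KEY",
    "Content-Type": "application/json",
}

def load_image_b64(path):
    """Read an image file and base64-encode it for the JSON payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def enroll(image_path, subject_id, gallery="EZShop"):
    """Store a new face in the shared gallery; the response carries its face_id."""
    payload = {
        "image": load_image_b64(image_path),
        "subject_id": subject_id,
        "gallery_name": gallery,
    }
    return requests.post(f"{KAIROS_URL}/enroll", json=payload, headers=HEADERS).json()

def verify(image_path, gallery="EZShop"):
    """Check a snapshot against the gallery; the response includes the best match."""
    payload = {"image": load_image_b64(image_path), "gallery_name": gallery}
    return requests.post(f"{KAIROS_URL}/verify", json=payload, headers=HEADERS).json()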

The Realtime Firebase Database

The Firebase database was another fundamental piece of our puzzle. Every other component interacted with it in real time. Firebase allows customized change listeners to be created on any data within the database. That feature, coupled with the easy set-up process, made it a no-brainer to use.
The schema was incredibly simple. The database stored an array of items and an array of users. The following is an example JSON skeleton of our database:
{
  "items": [
    {
      "item_id": 1,
      "item_name": "Soylent",
      "item_stock": 1,
      "price": 10
    }
  ],
  "users": [
    {
      "face_id": 1,
      "name": "Subhan Nadeem",
      "in_store": false,
      "cart": [
        1
      ]
    }
  ]
}
New users would be added to the array of users in our database after registering with the Kairos API. Upon entry or exit, the customer’s boolean in_store attribute would be updated, which would be reflected in the manager and personal app UIs.
Customers picking up an item would result in an updated item stock. Upon recognizing which customer picked up what item, the item's ID would be added to that customer's cart array.
I had planned for a cloud-hosted Node/Flask server that would route all activity from one device to another, but the team decided that it was much more efficient (although more hacky) for everybody to work directly upon the Firebase database.
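To give a flavor of what working directly upon the Firebase database meant, here is a hedged sketch using the Firebase Admin SDK for Python. The service-account file, database URL and index-based paths are placeholders that follow the JSON skeleton above, not our actual project code.
import firebase_admin
from firebase_admin import credentials, db

# Placeholder credentials and database URL.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://ezshop-demo.firebaseio.com"})

def set_in_store(user_index, in_store):
    """Flip the customer's in_store flag on entry or exit."""
    db.reference(f"users/{user_index}").update({"in_store": in_store})

def add_to_cart(user_index, item_id):
    """Append an item to the customer's cart and decrement its stock."""
    cart_ref = db.reference(f"users/{user_index}/cart")
    cart = cart_ref.get() or []
    cart.append(item_id)
    cart_ref.set(cart)
    # Assumes item_id N sits at array index N-1, as in the JSON skeleton above.
    stock_ref = db.reference(f"items/{item_id - 1}/item_stock")
    stock_ref.set(max((stock_ref.get() or 0) - 1, 0))

def on_users_change(event):
    """Change listener: fires whenever anything under /users is written."""
    print("users changed:", event.path, event.data)

db.reference("users").listen(on_users_change)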

The Manager and Personal Customer Apps

John, being the iOS wizard that he is, finished these applications in the first 12 hours of the hackathon! He really excelled at designing user-friendly and accessible apps.

The Manager App


This iPad application registered new customers into our Kairos API and Firebase database. It also displayed all customers in the store and the inventory of store items. The ability to interact directly with the Firebase database and observe changes made to it (e.g. when a customer’s in_store attribute changes from true to false) made this a relatively painless process. The app was a great customer-facing addition to our demo.

The Personal Shopping App


Once the customer was registered, we would hand a phone with this app installed to the customer. They would log in with their face (Kairos would recognize and authenticate). Any updates to their cart would be shown on the phone instantly. Upon exiting the store, the customer would also receive a push notification on this phone stating the total amount they spent.

The Item Rack, Sensors, and Camera

Soheil and Ruslan worked tirelessly for hours to perfect the design of the item shelf apparatus and the underlying Pi Python scripts.
The item rack apparatus. Three items positioned in rows, a tower for the security camera, and ultrasonic sensors positioned at the rear
There were three items positioned in rows. At the end of two rows, an ultrasonic proximity sensor was attached. We only had two ultrasonic sensors, so the third row had a light sensor under the items, which did not work as seamlessly. The ultrasonic sensors were connected to the Raspberry Pi, which processed the readings of the distance to the next closest object (either the closest item or the end of the rack) via simple Python scripts. The light sensor detected a “dark” or “light” state (dark if an item was on top of it, light otherwise).
When an item was lifted, the sensor’s reading would change and trigger an update to the item’s stock in the database. The camera (Android phone) positioned at the top of the tower would detect this change and attempt to recognize the customer picking up the item. The item would then instantly be added to that customer’s cart.
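For readers curious about the Pi side, the polling loop was along these lines. This is a reconstruction rather than our actual script: the GPIO pins, the empty-rack distance and the decrement_stock helper are assumptions for illustration.
import time

import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24                 # assumed GPIO pins for an HC-SR04 sensor
EMPTY_RACK_CM = 25.0                # assumed distance to the end of an empty row

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    """Fire a 10 microsecond trigger pulse and time the echo to estimate distance."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2     # speed of sound: roughly 343 m/s

try:
    item_present = True
    while True:
        now_present = read_distance_cm() < EMPTY_RACK_CM - 5
        if item_present and not now_present:
            print("item lifted -> update stock in the database")
            # decrement_stock(item_id=1)  # hypothetical helper writing to Firebase
        item_present = now_present
        time.sleep(0.2)
finally:
    GPIO.cleanup()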

Entrance and Exit Cameras

I opted to use Android phones as our facial recognition cameras, due to my relative expertise with Android and the easy coupling phones provide when taking images and processing them.
The phones were rigged on both sides of a camera tripod, one side at the store’s entrance, and the other at the store exit.
A camera tripod, two phones, and lots of tape
Google has an incredibly useful Face API that implements a native pipeline for detecting human faces and other related useful attributes. I used this API to handle the heavy lifting for facial recognition.
In particular, the API provided an approximate distance of a detected face from the camera. Once a customer’s face was within a close distance, I would take a snapshot of the customer, verify it against the Kairos API to ensure the customer existed in our database, and then update the Firebase database with the customer’s in-store status.
I also added a personalized text-to-speech greeting upon recognizing the customer. That really ended up wowing everybody who used it.
The result of this implementation can be seen here:
Once the customer left the store, the exit-detection state of the Android application was responsible for retrieving the items the customer picked up from the database, calculating the total amount the customer spent, and then sending a push notification to the customer’s personal app via Firebase Cloud Messaging.
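The exit-time push can be sketched with the Admin SDK's messaging module, assuming the app was initialized as in the earlier Firebase sketch; the device token and the price lookup are placeholders, not our production code.
from firebase_admin import db, messaging

def notify_total(user_index, device_token):
    """Sum the customer's cart and push the total via Firebase Cloud Messaging."""
    user = db.reference(f"users/{user_index}").get() or {}
    items = db.reference("items").get() or []
    prices = {item["item_id"]: item["price"] for item in items}
    total = sum(prices.get(item_id, 0) for item_id in user.get("cart", []))
    message = messaging.Message(
        notification=messaging.Notification(
            title="Thanks for shopping at EZShop!",
            body=f"Your total is ${total:.2f}",
        ),
        token=device_token,              # placeholder FCM registration token
    )
    messaging.send(message)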

Of the 36 hours, we slept for about 6. We spent our entire time confined to a classroom in the middle of downtown Toronto. There were countless frustrating bugs and implementation roadblocks we had to overcome. There were some bugs in our demo that you probably noticed, such as the cameras failing to recognize several people in the same shot.
We would have also liked to implement additional features, such as detecting customers putting items back on the rack and adding a wider variety of items.
Our project ended up winning first place at the hackathon. We set up an interactive booth for an hour (the Chipotle box castle that can be seen in the title picture) and had over a hundred people walk through our shop. People would sign up with a picture, log into the shopping app, walk into the store, pick up an item, walk out, and get notified of their bill instantly. No cashiers, no lines, no receipts, and a very enjoyable user experience.
Walking a customer through our shop
I was proud of the way our team played to each individual’s strengths and created a well put-together full-stack IoT project in the span of a few hours. It was an incredibly rewarding feeling for everybody, and it’s something I hope to replicate in my career in the future.
I hope this gave you some insight into what goes on behind the scenes of a large, rapidly prototyped, and hacky hackathon project such as EZShop.

How Uber was made


Uber has transformed the world. Indeed, it's inconceivable to think of a world without the convenience of the innovative ride-sharing service. Tracing its origins in a market that is constantly being deregulated, Uber has emerged triumphant. Valued at roughly US$66 billion, Uber has rapidly expanded, establishing branches in over 581 cities in over 82 countries, with the United States, Brazil, China, Mexico and India being Uber's most active countries.
If that wasn't impressive enough, in 2016 the company completed a total of 2 billion rides. When you consider the fact that the first billion rides took Uber 6 years, and the second billion was garnered in a mere 6 months, it's not surprising to see Uber emerge as a global business leader. This worldwide phenomenon is built on a simple idea, seductive in its premise - the ability to hail a car with nothing but your smartphone.
It took the problem of hailing a taxi and gave everyone an equitable solution while capitalizing on an emerging market. And smart people are asking the right question: how do I build an app like Uber for my business needs?

Humble Beginnings

It all started in 2008, with the founders of Uber discussing the future of tech at a conference. By 2010, Uber officially launched in San Francisco. In 6 months, they had 6,000 users and provided roughly 20,000 rides. What was the key to their success? For one, Uber’s founders focused on attracting both drivers and riders simultaneously. San Francisco was the heart of the tech community in the US and was thus the perfect sounding board for this form of technological innovation to thrive.
In the beginning, Uber spread their App through word of mouth, hosting and sponsoring tech events, and giving participants of those events free rides with the app. This form of go-to-market strategy persists today - giving 50% discounts to new riders on their first Uber ride. This initial discount incentivized users to become long-term riders, and the rest was history. As more and more people took to social media to tell the world about this innovative new App, the sheer brilliance of their marketing strategy paid off.

Product Technology Cohesion: How Uber Works

What makes Uber, Uber? For one, it's the ubiquitous appeal, or the way in which they streamlined their product, software and technology. It was, at the start, fresh, innovative, and had never been seen before. So if one were to replicate the model, they'd need to look at Uber's branding strategy.
To use Uber, you have to download the app, which launched first on iPhone, then extended to Android and Blackberry.
Uber's co-founders, Garrett Camp and Travis Kalanick, relied heavily on 6 key technologies based on iOS and Android geolocation. What really sold it, though, was its clear core value - the ability to map and track all available taxis in your given area. All other interactions are based on this core value - and it's what sets Uber (and will set your app) apart from the crowd. To build an App like Uber, you'll need to have:
1. Registering/Log-in features: Uber allows you to register with your first name, last name, phone number and preferred language. Once you’ve signed up, they’ll send you an SMS to verify your number, which will then allow you to set your payment preferences. Trip fares are charged after every ride through this cashless system.
2. Booking features: This allows drivers the option to accept or deny incoming ride requests and get information on the current location and destination of the customer.
3. The ability to identify a device's location: Uber, via the CoreLocation framework (on iOS), obtains the geographic location and orientation of a device to schedule pickups and deliveries. Understanding iOS and Android geolocation features is crucial for this step, because that's what your App runs on.
4. Point to Point Directions: The Uber App provides directions to both the driver and the user. Developers of the Uber App use MapKit for iOS and Google Maps Android API for Android to calculate the route and make directions available. They further implemented Google Maps for iPhone and Android, but cleverly adapted technology from other mapping companies to solve any logistical issues that might come up.
5. Push Notifications and SMS: You get up to 3 notifications instantly from Uber when you book a ride.
  • A notification telling you when the driver accepts your request
  • One when the driver is close to your location
  • One in the off chance your ride has been cancelled
You further get the full update on your driver’s status, down to the vehicle make and license number, and an ETA on the taxi’s time of arrival.
6. Price Calculator: Uber offers a cashless payment system, paying drivers automatically after every ride, processed through the user's credit card. Uber takes 25% of the driver's fare, making for easy profit. They paired with Braintree, a world leader in the mobile payment industry, but other good options available are Stripe, or PayPal via Card.io.
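As a toy illustration of this cashless split, here is a small sketch; the base fare and per-unit rates are made-up numbers, not Uber's actual pricing.
def fare_breakdown(distance_km, duration_min,
                   base=2.50, per_km=1.20, per_min=0.25, commission=0.25):
    """Compute a hypothetical fare, the platform's ~25% cut, and the driver payout."""
    fare = base + per_km * distance_km + per_min * duration_min
    platform_cut = fare * commission
    driver_payout = fare - platform_cut
    return round(fare, 2), round(platform_cut, 2), round(driver_payout, 2)

print(fare_breakdown(distance_km=8.4, duration_min=17))   # e.g. (16.83, 4.21, 12.62)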
Here are few more much sought after features for the user’s side of the App:
  • The ability to see the driver's profile and status: your customers will feel safer being able to see your driver's verification, and it makes good security sense to ensure you know who's using your App for profit.
  • The ability to receive alerts: Receive immediate notifications about the status of your ride and any cancellations.
  • The ability to see the route from their phones (an in-built navigation system): this is intrinsically linked to your geolocation features; you want to be able to direct your taxis along the quickest, most available routes.
  • Price calculation: Calculating a price on demand and implementing a cashless payment system.
  • A “split fare” option: Uber introduced this option with great success. It allows friends to split the price of the ride.
  • Requesting previous drivers: It’s a little like having your favourite taxi man on speed dial, and is a good way of ensuring repeat customers.
  • Waitlist instead of surge pricing: avoid the media hassle of surge pricing by employing a waitlist feature, so your users can be added to a waiting list rather than be charged more than they should be, and to keep them from refreshing the App during peak hours, reducing the resources required by your backend infrastructure.
Another key to Uber’s success, that should be noted by potential developers of similar Apps, is the way in which Uber operates. They tap into more than one market which equates to more riders, more drivers, and more business for the company. Uber has mastered the art of localization - the ability to beat out pre-existing markets and competitors, which further retains their customer base by improving their own business strategy.
They've taken local context and circumstances into consideration. For example, they partnered with PayPal in November 2013 because many people in Germany don't use credit cards, and switched to services based on SMS messages in Asia, where there are more people but fewer smartphones per capita. This helps them cater to various markets and optimize profits.
The Uber marketing strategy isn’t static - it’s dynamic. Expansion was necessary, and the business model reaps profits from saturating the taxi market with their customers and drivers, driving their exponential growth. What aspiring App developers can take from this is that you need to design your App for flexibility.
Design your App in a way that’s going to let it take a hit and roll with punches. Having a system in place that allows you to build and integrate changes effectively within the App and allows team members to communicate effectively is of paramount importance.
What made Uber so successful was its ability to reshape how we think about technology and its operation. Indeed it made the market a better, more efficient place through the innovative on-demand service.

What Technology is Uber Built on?

The tech side of the App is written largely in JavaScript, which is also used to calculate supply and predict demand, with the real-time dispatch systems being built on Node.js and Redis. Objective-C and Java are used for the iPhone and Android apps respectively. Twilio is the force behind Uber's text messages, and push notifications are implemented through the Apple Push Notification Service on iOS and Google Cloud Messaging (GCM) for the Android App.

How much does Uber make?

Actually, it's a lot less than you think. Despite the $66 billion valuation, Uber's 25% commission (which works out to about $0.19 per ride) mostly goes towards credit card processing, interest, tax, compensation for employees, customer support, marketing, and various anti-fraud efforts.

How much does it take to build Uber?

Uber's not just one App, it's two - one for the rider and one for the driver. The cost of developing an App like Uber depends on a number of factors:
  • the cost of building an MVP
  • product development and acquisition
  • getting the economics of marketing sorted
  • the constant cost of building on and improving your App’s analytic capabilities
When you make an App like Uber, you'll invest a fair bit into design services, backend and web development, and project management, not to mention Android and iOS native app development. The total man-hours round out to around 5,000 hours for similar on-demand taxi Apps, which puts the cost of developing such an App at around $250,000 (assuming your team works for $50 an hour). However, since hourly rates roughly range from $20 to $150, costs could be considerably higher or lower.

Conclusion

To wrap up, Uber's success was due to several factors, including a clear business model with interaction-based features built around it (and not the other way around), combined with a marketing strategy focused on attracting users.
The question on everyone's mind, of course, is: how can you reduce the overall risk of failure and make sure that your idea and product are viable when you're developing an App?
One way is to use a Mobile App development partner (such as Octodev) that has worked on many such Apps and understands the processes involved. An advantage of using such a partner is that they've worked on many such App development projects and have the practical experience in product development to avoid the pitfalls and make the most of your vision.
Octodev App Development Process
Another important part of ensuring that your App development project is swiftly and smoothly executed is having a clear road map and regular communication during the project. There are many approaches to achieve this and we, at Octodev, use a consultative approach to App development. We draw from our successful App implementations. Get in touch with us now if you want an accurate cost for your own Uber like App idea.
This article was originally published on the Octodev Blog.
