Programming & IT Tricks

Thursday, January 4, 2018

I Was Supposed to be an Architect


I’m leading a VR development studio, but the truth is I’ve been navigating a series of epic career learning curves that have taken me far outside of my comfort zone, and I wouldn’t have it any other way.
Main Street, Mall or Modem
On my quest to start sharing more about our process and lessons learned on the virtual frontier, I thought I’d start with a bit of background on how I arrived here in the first place.
I studied and practiced architecture, but I’ve been fascinated with virtual technologies for as far back as I can remember. In fact, my architectural thesis project in grad school focused on how VR and digital technologies would someday revolutionize architecture, specifically retail architecture. This was 17 years ago, when VR was very expensive and largely inaccessible, but the brilliant pioneers innovating in the field were already demonstrating its massive potential. It was only a matter of time before VR found its way to the mainstream.
Like so many other physical manifestations, from music to books and beyond, I believe buildings are subject to a similar digital transcendence. It’s already happening in a pretty big way, and this is just the beginning of a major architectural transformation that might take another decade or two to fully surface, but I digress… I’m saving this interest for a future pivot, and almost certainly another epic learning curve to go with it.
I tried using EverQuest to visualize architecture.
I had a level 47 Dark Elf Shadow Knight in EverQuest, but spent most of my time wandering around, exploring the environments. What I really wanted to do was import my own architectural models and explore them inside the game.
If they could have such elaborate dungeons and forts to explore in Everquest, with people from all around the world working together in the game virtually, why couldn’t the same technology also be used to visualize a new construction project, with the architect, building owner, and construction team exploring or collaborating on the design together?
This quest to visualize architecture in a real-time world became a ‘first principle’ in my career path that I’ve been chasing ever since.
I met my amazing and tremendously patient wife, Kandy, in grad school, and after studying architecture together in Europe and graduating, we practiced architecture for some time before starting our own firm, Crescendo Design, focused on eco-friendly, sustainable design principles.
Then one day in 2006, I read an article in Wired about Second Life — a massively multi-player world where users could create their own content. Within an hour, I was creating a virtual replica of a design we had on the boards at the time. I had to use the in-world ‘prims’ to build it, but I managed.
I was working in a public sandbox at the time, and when the design was mostly finished, I invited the client in to explore it. They had two young kids, who were getting a huge kick out of watching over their parents’ shoulders as they walked through what could soon be their new home.
The Naked Lady, the Sheriff Bunny, and Epic Learning Curve #1.
We walked in the front door, when suddenly a naked woman showed up and started blocking the doorways. I reported her to the ‘Linden’ management, and a little white bunny with a big gold sheriff’s badge showed up and kicked her out. “Anything else I can help with?” Poof… the bunny vanished and we continued our tour. That’s when I realized I needed my own virtual island (and what an odd place Second Life was).
But then something amazing happened that literally changed my career path, again.
I left one of my houses in that public sandbox overnight. When I woke up in the morning and logged in, someone had duplicated the house to create an entire neighborhood — and they were still there working on it.
Architectural Collaboration on Virtual Steroids
I walked my avatar, Keystone Bouchard, into one of the houses and found a group of people speaking a foreign language (I think it was Dutch?) designing the kitchen. They had the entire house decorated beautifully.
One of the other houses had been modified by a guy from Germany who thought the house needed a bigger living room. He was still working on it when I arrived, and while he wasn’t trained in architecture, he talked very intelligently about his design thinking and how he resolved the new roof lines.
I was completely blown away. This was architectural collaboration on virtual steroids, and opened the door to another of the ‘first principle’ vision quests I’m still chasing. Multi-player architectural collaboration in a real-time virtual world is powerful stuff.
Steve Nelson, Jon Brouchoud, and Carl Bass delivering Keynote at Autodesk University 2006
One day Steve Nelson’s avatar, Kiwini Oe, visited my Architecture Island in Second Life and offered me a dream job designing virtual content at his agency, Clear Ink, in Berkeley, California. Kandy and I decided to relocate there from Wisconsin, where I enjoyed the opportunity to build virtual projects for Autodesk, the U.S. House of Representatives, Sun Microsystems and lots of other virtual installations. I consider that time to be one of the most exciting in my career, and it opened my eyes to the potential for enterprise applications for virtual worlds.
Wikitecture
I started holding architectural collaboration experiments on Architecture Island. We called it ‘Wikitecture.’ My good friend, Ryan Schultz, from architecture school suggested we organize the design process into a branching ‘tree’ to help us collaborate more effectively.
Studio Wikitecture was born, and we went on to develop the ‘Wiki Tree’. One of our projects won the Founder’s Award and third place overall, from over 500 entries worldwide, in an international architecture competition to design a health clinic in Nyaya, Nepal.
These were exciting times, but we constantly faced the challenge that we weren’t Second Life’s target audience. It was a consumer-oriented platform, and Linden Lab was resolutely and justifiably focused on growing its virtual land sales and in-world economy, not on building niche-market tools to help architects collaborate. I don’t blame them: more than 10 years after launch, Second Life’s in-world economy still moves more real money than that of some small countries.
We witnessed something truly extraordinary there, something I haven’t seen or felt since. Suffice it to say, almost everything I’ve done in the years since has been toward my ultimate goal of someday, some way, somehow, instigating the conditions that gave rise to such incredible possibilities. We were onto something big.

Top 3 Mobile Technology Trends You Can’t Miss in 2018


Before I kick-start this article, please allow me to wish “A Very Very Very… Happy New Year 2018” to all you lovely readers and well-wishers.
It has been an amazing journey being part of this mobile app revolution since 2006. I feel blessed to have seen both the pre- and post-smartphone eras, experiencing the change myself as a developer, a leader, and now the father of my own mobility startup. So I thought I would analyze the trendsetters that will rule this new year.
Here are my top three technology trends to look out for in your endeavors in 2018, which, as always, will offer you loads of new opportunities to rock this world. Being part of this mobile app ecosystem, I feel immense pride writing this piece for all you visionaries and future mobile appreneurs.

1. Augmented Reality / Virtual Reality:

Wikipedia defines AR as:

Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are “augmented” by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data.

As per Wikipedia, VR is:

Virtual reality (VR) is a computer technology that uses virtual reality headsets or multi-projected environments, sometimes in combination with physical environments or props, to generate realistic images, sounds and other sensations that simulate a user’s physical presence in a virtual or imaginary environment.
Mobile AR could become the primary driver of a $108 billion VR/AR market by 2021 (underperform $94 billion, outperform $122 billion) with AR taking the lion’s share of $83 billion and VR $25 billion.
A lot happened in AR in 2017, with Google and Apple investing heavily to harness its true potential. Apple launched ARKit and Google came up with ARCore, enabling developers to innovate and create meaningful mobile solutions for smartphone users.
AR adds a digital layer of virtual information over the real world to give a more realistic and unambiguous outlook. AR-enabled apps will gradually empower retail, life sciences, manufacturing, and many other domains as a wide range of AR apps is developed to cater to these sectors.

I Feel:

AR will take a huge leap forward to further revolutionize the ever-progressing gaming industry, and it will stretch beyond gaming to empower the digital marketing world, where gamification will be employed to attract and acquire new consumers for brands. Marketers need to adopt this tool to target their customers beyond conventional physical marketing. With most marketers seeing augmented reality as a way to provide a compelling user experience, we will soon see a plethora of creative AR apps alluring consumers to buy customized offerings.
Virtual reality technologies will stay focused on the gaming and events sphere, as they already were in 2017, and will go beyond it to offer a more evolved app experience and an elevated dose of entertainment for gamers.

I Find:

With the iPhone X, Apple is trying to change the face of AR by making it a common use case for the masses. A whole bunch of top tech players also think this technology, sometimes called mixed reality or immersive environments, is set to create a truly blended digital-physical environment for people who mostly consume the digital world through the powerhouse in their pocket.

Some of the Popular AR/VR Companies (as reported by Fast Company):

  1. Google: for using VR to analyse your living room
  2. Snapchat: for helping its app users take control of their own augmented reality
  3. Facebook: for gathering IRL friends in VR
  4. NVIDIA: for providing the power to process VR
…and many more.

2. Internet of Things: A Connected World of Hardware & Software:

Gartner predicts 26 billion connected devices by 2020, ranging from LEDs, toys and sports equipment to medical equipment and controllable power sockets. We will be privileged to witness a world where everything is connected through these small devices, bringing information right to where you are standing. This information will also be tapped right where it is generated, complementing the data centre through edge computing.
Smart objects will interact with our smartphones and tablets, which will eventually function like a TV-style remote control: displaying and analyzing data, interfacing with social networks to monitor “things” that can tweet or post, paying for subscription services, ordering replacement consumables and updating object firmware.
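To make that device-to-phone interaction concrete, here is a minimal sketch of a phone-side client receiving readings from a smart sensor over MQTT, a lightweight publish/subscribe protocol widely used in consumer IoT. The broker host and topic are invented for illustration, and the callbacks follow the paho-mqtt 1.x API; this is a sketch, not any particular product’s integration.

    # Minimal sketch: a client subscribing to a smart sensor's readings over MQTT.
    # Broker host and topic are illustrative placeholders, not a real product's API.
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"            # hypothetical MQTT broker
    TOPIC = "home/living-room/temperature"   # hypothetical sensor topic

    def on_connect(client, userdata, flags, rc):
        # Subscribe once the connection is established.
        client.subscribe(TOPIC)

    def on_message(client, userdata, msg):
        # Each published reading lands here; a phone app would update its UI instead.
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(BROKER, 1883, keepalive=60)
    client.loop_forever()  # block and dispatch callbacks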

Big Tech Giants Are Already Bullish on the IoT-Connected World:

  • Microsoft is powering its Intelligent Systems Service by integrating IoT capabilities into its enterprise service offerings.
  • Some of the known communication technologies powering the IoT concept are RFID, Wi-Fi, EnOcean and RIOT OS.
  • Google is working on two ambitious projects, Nest and Brillo, built around using IoT to fuel your home automation needs. Brillo is an IoT OS that supports Wi-Fi, Bluetooth Low Energy and other Android features.
  • Enterprise software vendors like SAP, with its Internet of Things solutions, are likewise adding IoT capabilities to their offerings.
  • Amazon launched the Amazon Echo, an amazing device that works on your voice commands to answer your queries, play songs and control smart devices within a certain range.

I Feel:

IoT and IoT-Based Apps:

Are here to stay and will play a very crucial role in helping you navigate this world with more ease and comfort: making your commute safer, your communication smarter, your shopping more productive, your learning more engaging, and much more, all to make your life effective and efficient. In fact, IoT is slowly becoming part of every aspect of our lives. Not only will IoT apps augment our comfort, but they will also give us more control to simplify routine work and personal tasks.

Internet Of Things Evolution:

Most IoT-powered devices already rely on mobile devices to syndicate data, especially in consumer IoT. With the surge in overall use of the Internet of Things, I feel more mobile apps will be developed to manage these smart devices.

3. Blockchain: Powering the World Of Cryptos:

As Per Investopedia:
A blockchain is a digitized, decentralized, public ledger of all cryptocurrency transactions. Constantly growing as ‘completed’ blocks (the most recent transactions) are recorded and added to it in chronological order, it allows market participants to keep track of digital currency transactions without central recordkeeping. Each node (a computer connected to the network) gets a copy of the blockchain, which is downloaded automatically.
To know more about blockchain, please refer to:
  1. Blockchain Technology Part 1 : What and Why ?
  2. Smart Contract A Blockchain Innovation for Non-Techies
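Before the numbers, here is a tiny Python sketch of the core idea in the definition above: each block stores the hash of the previous block, so tampering with any earlier entry invalidates everything after it. This is a toy illustration of the chaining mechanism, not how Bitcoin or any production ledger actually works.

    # Toy illustration of a hash-linked ledger (not a production blockchain).
    import hashlib
    import json
    import time

    def block_hash(block):
        # Hash the block's contents deterministically.
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def add_block(chain, transactions):
        prev = chain[-1]
        chain.append({
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": block_hash(prev),  # the link that makes it a chain
        })

    genesis = {"index": 0, "timestamp": 0, "transactions": [], "prev_hash": ""}
    chain = [genesis]
    add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
    add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])

    # Any edit to an earlier block changes its hash and no longer matches
    # the prev_hash stored in the block that follows it.
    ok = all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
             for i in range(1, len(chain)))
    print("chain valid:", ok)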
As per a recent study by IBM:
Nine in ten government organizations are planning to invest in blockchain for financial transaction management, asset management, contract management and regulatory compliance purposes.
Another study, by Infosys, says:
One-third of banks are expected to adopt commercial blockchain in 2018.
So it is quite clear that mobility solutions built on secure transactions will rule fintech and other industries where security lies at the core. App developers will have a crucial role to play and will be expected to develop more innovative app solutions that cater to the need for a secure, connected world. Your mobile phone generates lots of confidential information which needs to be protected from third-party breaches. So, techies, gear up and pull up your socks: I feel blockchain-based security mechanisms will be built into mobile apps in the coming years and will be needed in all kinds of industries, from fintech and e-commerce to insurance tech.
Blockchain-powered cryptos like Bitcoin, Ripple and Ethereum are already a rage in the technology and investment worlds. Blockchain has captured the imagination of many tech innovators, leading them to adopt the technology to develop wallets and currencies, most of them built for mobile devices and computer systems, thereby offering techies lots of opportunities to adopt it as a future-facing career option.
Using blockchain tech, entrepreneurs will develop solutions, mostly on mobile, to validate transactions securely, manage contracts smartly, store digital currencies (like Bitcoin and XRP), manage voting, secure hassle-free shopping, power banking transactions and deliver many more innovations aimed at making consumers’ lives more resourceful and productive.

(Infographic: a blockchain use case by R3.)

There are many more trends that will disrupt the mobility world, like:
  • Artificial Intelligence: machine learning and deep learning will play a crucial role in giving machines the intelligence to make smart decisions without human intervention. Mobile chatbots are a prime example of one such use case of AI. Apps like Siri and Google Now are already harnessing AI, and they will inspire many more voice-based and image-based AI innovations from mobile appreneurs. App developers will tap mobile data and give it more intelligent forms to make our lives smarter over time.
  • Mobile computing/cloud computing: mobility solutions based on these will be in high demand, especially for big enterprises where business decisions are made on intelligent data analytics. All this data will be stored in the cloud, and mobile will play a major role in harnessing its power to serve consumers in real time.
Some of my other relevant tech articles, which may be useful:
  1. All About Edge Computing- How It Is Changing The Present Past & Future Of IoT?
  2. Top 3 Technology Trends For 2018, Which Will Be A Game Changer !
  3. All You Wanted To Know About BitCoin?
  4. NLP Fundamentals: Where Humans Team Up With Machines To Help It Speak
Summary:
Having seen the world of mobility change from the feature phone to the smartphone era, I am amazed at how it has transformed human life. Now we can communicate in a split second, transact in no time, buy what we need with one touch, get entertained when and where we want, shower our love on our closest ones without being physically present, and do many more things one can imagine, all on this tiny, powerful device.
So as a developer and a tech visionary, you have the greater responsibility to make sure that you are creating tools which complement user needs and impact users deeply. It is your duty to entertain them, educate them, and make them feel safe and secure on the go.
I will end by extending my sincere gratitude to all you awesome readers for showering your love and constantly inspiring me to write more and, eventually, learn more.


Xiaomi and HomeKit

Xiaomi Starter Kit (image via Smart-Home Hobby)
I’ve been building my smart home over the last few years and was in the market to add sensors everywhere in an effort to improve the automations that I was able to achieve.
I previously had a couple of Philips Hue motion sensors and Elgato Eve Door & Window sensors, but at £35 apiece, adding these to every room and door would get very expensive. I was introduced to the Xiaomi ecosystem and decided to give it a try. Interestingly, this is the first time that I’ve opted to buy non-native devices and rely on Homebridge for the integration; prior to this, I’d used Homebridge as a way to integrate tech that I already owned.
Purchase
I got all of my kit from a site called Lightinthebox.com, the only site I found that shipped to the UK and stocked a wide range. I was so impressed with the initial kit that I went on to purchase more.
One thing to note is that the website quoted 5–8 days for shipping; it was actually more like 19, but for the price I can’t really complain.
Setup
The setup was fairly trivial. I did, however, need to upgrade the version of Node.js running on my RPi3 to work with the plugin. So as not to waste countless hours in node dependency hell, I’d recommend a fresh install of everything: I took a copy of my config.json file, made a note of my installed plugins and completely wiped my SD card.
Follow these steps to get going (this assumes you’re on an iPhone running iOS 11 or later):
  • Download the MiHome app, then set up the gateway and configure your accessories. It doesn’t really matter which rooms the devices are placed in.
  • Open the MiHome app, tap on the gateway, then tap on the 3 dots in the top right corner.
  • Select About, then repeatedly (and quickly) tap on the blank space until three additional menu options in Chinese appear.
  • Tap the second option. This allows you to turn on local access mode. A password should appear. Make a note as you’ll need that soon.
  • Tap back and select the third option. Make a note of the gateway’s MAC address. There are a couple listed: one for the router that the gateway is connected to and one for the gateway itself. If it’s unclear which is which, try both. (If you run Homebridge with the -D flag, you’ll get debug info which will let you know whether you’ve connected to the gateway correctly.)
  • Install the homebridge-mi-aqara plugin and input the MAC address and password from the steps above into your config.json file (see the sketch below this list).
  • Restart HomeBridge and your accessories should now appear.
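For reference, here is roughly what the relevant section of config.json might look like. Treat this as a sketch: the exact key names vary between versions of the homebridge-mi-aqara plugin, so check the plugin’s README for the authoritative format. The sid (the gateway MAC without separators) and password values are placeholders.

    {
      "platforms": [
        {
          "platform": "MiAqaraPlatform",
          "sid": ["34ce008e1234"],
          "password": ["password-from-local-access-mode"]
        }
      ]
    }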
Usage
The first thing to note is how tiny the door sensors are. Here’s an image with the Elgato Eve as a comparison. Due to the size of the Eve device and the trim around my doors, I’ve had to be creative with how I mount it.
Xiaomi (left) vs Elgato Eve (right) door sensors
The second thing to note is how quickly these sensors update within HomeKit; unscientifically, I’d say it’s instant. Even with the latest firmware, the Elgato sensors still have a slight delay if they haven’t been triggered for a period of time. This makes them unsuitable for certain automations, for example where you need a light to turn on immediately.
Xiaomi door sensors in HomeKit
The door sensors show up as regular sensors, along with three other accessories from the gateway: a light sensor, a multi-colour light and a switch. The light actually makes a pretty decent nightlight, especially as you don’t need to physically connect it to a router.
I’ve got a couple of automations setup where I use a door sensor in combination with a motion sensor to detect if somebody is entering or leaving a room. To do this I have a motion sensor on each side of the door and then use the motion as a conditional rule. For example, I want to turn on a table lamp in my daughters room when the door opens, but only between 5am and 8am. This assumes that I’m going in to her room when she is awake and that I want the light to come on with a soft glow. It also assumes that if I’m already in the room and leave during that window that I don’t want to like to come on (if for example she actually isn’t awake, or she settles back to sleep). To do this, I have a motion sensor on the landing and in her room (via a D Link Omna camera) with a rule stating that the lamp should only come on when there is motion detected on the landing. If there is motion on the landing then I must be outside of the room, therefore entering. If there’s motion in the room, then I’m leaving so the rule doesn’t trigger again.
To get that extra condition I used the Elgato Eve app: first set up the basic automation rules in the Apple Home app, then add the condition using Eve.
So far, I’m really impressed with the Xiaomi system and would certainly consider adding more devices (although you can only add 30 per gateway) to my setup.

Should Regulators Force Facebook To Ship a “Start Over” Button For Users?


I don’t really understand most of the proposals to “regulate” Facebook. There are some concrete proposals on the table regarding political ads and updating antitrust for the data age, but other punditry is largely consumer advocacy kabuki. For example, blunting the data Facebook can use to target ads or tune newsfeed hurts the user experience, and there’s really no stable way to draw a line around what’s appropriate versus not. These experiences are too fluid. But while I want keep the government out of the product design business, there’s an alternate path which has merit: establish a baseline for the control a person has over their data on these systems.
Today the platforms give their users a single choice: keep your account active or delete your account. Sure, some expose small amounts of ad targeting data and let you manipulate that, but on the whole they provide limited or no control over your ability to “start over.” Want to delete all your tweets? You have to use a third party app. Want to delete all your Facebook posts? Good luck with that. Nope, once you’re in the mousetrap, there’s no way out except account suicide.
BUT is that really fair? Over multiple years, we all change. Things we said in 2011 may or may not represent us today. And these services evolve: did we think we’d be using Facebook as a primary source of news and private messaging back when we were posting baby photos? Did we think it would also own Instagram, WhatsApp, Oculus and so on when we created accounts on those services? We’re the frogs, slow-boiling in the pot of water.
What if every major platform was required to have something between Create Account and Delete Account? One which allows you to keep your user name but selectively delete the data associated with the account? For Facebook, you could have a set of individual toggles to Delete All Friend Connections, Delete All Posts, Delete All Targeting Data. Each of these could be used individually or together to give you a fresh start. Maybe you want to preserve your social graph but wipe your feed? Maybe you want to keep your feed but rebuild your graph.
Or for Twitter: Delete All Likes, Delete All Tweets, Delete All Follows, Delete All Targeting Data.
Or for YouTube: Delete All Uploads, Delete All Subscriptions, Delete All Likes, Delete All Targeting Data.
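To make the proposal concrete, here is a hypothetical sketch of the data model such a feature might expose. None of these APIs exist on any platform today; every name below is invented for illustration.

    # Hypothetical "start over" controls; no platform exposes these today.
    from dataclasses import dataclass

    @dataclass
    class StartOverRequest:
        delete_posts: bool = False        # tweets / posts / uploads
        delete_connections: bool = False  # friends / follows / subscriptions
        delete_likes: bool = False
        delete_targeting_data: bool = False

    def start_over(account, req: StartOverRequest):
        # Each toggle is independent: wipe the feed but keep the graph,
        # or vice versa. The account name and login survive either way.
        if req.delete_posts:
            account.posts.clear()
        if req.delete_connections:
            account.connections.clear()
        if req.delete_likes:
            account.likes.clear()
        if req.delete_targeting_data:
            account.targeting_profile.clear()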
The technical requirements to develop these features are only complicated in the sense of making sure you’re deleting the data everywhere it’s stored; otherwise, every product already supports the “null” state, which looks very much like a new account. This leads me to believe that the only reasons these features don’t exist today are (a) it would be bad for business and (b) an actual or perceived lack of consumer demand. Anecdotally, it feels like (b) is changing: more and more people I know wipe their tweets, talk about deleting their histories, and so on. Imagine the ability to stage a “DataBoycott” by clearing your history if you think Facebook is taking liberties with your privacy. This is what keeps power in check.

Wednesday, January 3, 2018

2018 Best Running Headphones


As we’re getting ready to ship your Axum earbuds next month,
we still get lots of questions from customers about the sound quality.
How’s the highs? what made your bass so good?
So I’ll try to explain in 2–3 minutes how implementing Qualcomm’s technology helped us achieve CD-Quality sound.
For those of you who’ve been living under a rock, Qualcomm is the world leader in mobile technologies.
Several years ago they’ve acquired a company from the UK that changed the game in wireless audio.
What’s so special about them?
Well, let’s just say they took the CD-Quality sound and were able to reproduce it over wireless connectivity-
Yes, I know it sounds simple but it’s quite a big problem.
As we said in the past, we’re using it in Axum and that’s 1 of our secrets.
Now, for those of you who want to get technical and learn more (I’ll be honest with you: once the engineer told me about it, I spent the night digging into it), I’ll explain how it works.
When you send music over a wireless connection, it breaks up, because the bandwidth isn’t big enough.
So what did they do?
They compress the music into much smaller packets,
so you can stream it over a wireless connection.
But the best part?
You’ll get the same wired sound quality!
As for the bass, what makes it so good:
  1. Our custom-made driver, which we’ve been testing over and over for the last 14 months.
  2. The acoustics and shape of the earbuds. Maybe you’ve seen those $1 speakers that you place on your table, and suddenly the entire table starts shaking from the bass (I’ll ignore their joy-risking, horrible sound quality), because the table turns into a kind of bass box for the speaker. Acoustics are a major factor in sound, and that’s why we had to do so many tests on the plastics and internal design. We knew that every 1mm change (especially in such a small product) would have a huge impact on the sound.
In production, like in life, there are no shortcuts.
You want to achieve the best?
You want to make an awesome product?
You want to beat the competition?
Well, in such case you’ll have to outwork your competition!
We knew the ONLY way to achieve success was to push our limits
and make another test, another prototype, another upgrade,
and when we thought “that’s it, Axum is ready…” to try another change,
another solution to make it even better.
And trust me, we wish we could have made it on the first try and shipped it.
But that’s the beautiful world we’re living in,
full of surprises and great rewards for those who go all-in on their goals.
These kinds of things (and a hundred more) are what turn Axum
into the best running headphones of 2018!
In conclusion, I don’t think we ever said it,
but we are proud to have you on board!
We know that you’re a real pusher, a winner,
the kind of person who can’t quit and is always raising the bar for others!
And that’s what Axum is all about.
Next month we start shipping, and you’d better get in shape!

Monday, January 1, 2018

How Do You Vote? Google Street View Calls It


Artificial intelligence is making it possible for Street View images to be mined for insights about the economy, politics and human behavior, just as text mining has done for years.

Yellow taxi cabs in crowded traffic in New York, Jan. 10, 2017. For the first time, helped by recent advances in artificial intelligence, researchers are able to analyze large quantities of images, pulling out data that can be sorted and mined to predict things like income, political leanings and buying habits — Hiroko Masuike/The New York Times

By Steve Lohr
What vehicle is most strongly associated with Republican voting districts? Extended-cab pickup trucks. For Democratic districts? Sedans.
Those conclusions may not be particularly surprising. After all, market researchers and political analysts have studied such things for decades.
But what is surprising is how researchers working on an ambitious project based at Stanford University reached those conclusions: by analyzing 50 million images and location data from Google Street View, the street-scene feature of the online giant’s mapping service.
For the first time, helped by recent advances in artificial intelligence, researchers are able to analyze large quantities of images, pulling out data that can be sorted and mined to predict things like income, political leanings and buying habits. In the Stanford study, computers collected details about cars in the millions of images they processed, including makes and models.
“All of a sudden we can do the same kind of analysis on images that we have been able to do on text,” said Erez Lieberman Aiden, a computer scientist who heads a genomic research center at the Baylor College of Medicine. He provided advice on one aspect of the Stanford project.
For computers, as for humans, reading and observation are two distinct ways to understand the world, Lieberman Aiden said. In that sense, he said, “computers don’t have one hand tied behind their backs anymore.”
Text has been easier for AI to handle, because words have discrete characters — 26 letters, in the case of English. That makes it much closer to the natural language of computers than the freehand chaos of imagery. But image recognition technology, much of it developed by major technology companies, has improved greatly in recent years.
The Stanford project gives a glimpse at the potential. By pulling the vehicles’ makes, models and years from the images, and then linking that information with other data sources, the project was able to predict factors like pollution and voting patterns at the neighborhood level.
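A toy sketch of the kind of linkage this describes: once each image has been reduced to structured rows, say a ZIP code and a vehicle type, ordinary data analysis takes over. The data below is invented purely to show the shape of the computation, not to reproduce the study’s results.

    # Toy sketch: joining per-image car detections to voting data.
    import pandas as pd

    # Pretend each row is one car classified from a Street View image.
    cars = pd.DataFrame({
        "zip":  ["53703", "53703", "79901", "79901", "79901"],
        "kind": ["sedan", "sedan", "pickup", "pickup", "sedan"],
    })

    votes = pd.DataFrame({
        "zip": ["53703", "79901"],
        "republican_share": [0.31, 0.56],
    })

    # Pickup share per ZIP, joined to vote share, then correlated.
    pickup_share = (cars["kind"].eq("pickup")
                    .groupby(cars["zip"]).mean()
                    .rename("pickup_share").reset_index())
    merged = pickup_share.merge(votes, on="zip")
    print(merged)
    print("correlation:", merged["pickup_share"].corr(merged["republican_share"]))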
“This kind of social analysis using image data is a new tool to draw insights,” said Timnit Gebru, who led the Stanford research effort. The research has been published in stages, the most recent in late November in the Proceedings of the National Academy of Sciences.
Timnit Gebru, who led a research effort based at Stanford University that analyzed 50 million images and location data from Google Street View, in Boston, Dec. 29, 2017 — Cody O’Laughlin/The New York Times
In the end, the car-image project involved 50 million images of street scenes gathered from Google Street View. In them, 22 million cars were identified, and then classified into more than 2,600 categories like their make and model, located in more than 3,000 ZIP codes and 39,000 voting districts.
But first, a database curated by humans had to train the AI software to understand the images.
The researchers recruited hundreds of people to pick out and classify cars in a sample of millions of pictures. Some of the online contractors did simple tasks like identifying the cars in images. Others were car experts who knew nuances like the subtle difference in the taillights on the 2007 and 2008 Honda Accords.
“Collecting and labeling a large data set is the most painful thing you can do in our field,” said Gebru, who received her Ph.D. from Stanford in September and now works for Microsoft Research.
But without experiencing that data-wrangling work, she added, “you don’t understand what is impeding progress in AI in the real world.”
Once the car-image engine was built, its speed and predictive accuracy were impressive. It successfully classified the cars in the 50 million images in two weeks. That task would take a human expert, spending 10 seconds per image, more than 15 years.
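The arithmetic behind that comparison is easy to verify:

    # 50 million images at 10 seconds each, versus the model's two weeks.
    images = 50_000_000
    human_seconds = images * 10
    human_years = human_seconds / (3600 * 24 * 365)
    print(f"Human expert: {human_years:.1f} years")  # ~15.9 years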
Identifying so many car images in such detail was a technical feat. But it was linking that new data set to public collections of socioeconomic and environmental information, and then tweaking the software to spot patterns and correlations, that makes the Stanford project part of what computer scientists see as the broader application of image data.
“There has been an explosion of computer vision research, but so far the societal impact has been largely absent,” said Serge Belongie, a computer scientist at Cornell Tech. “Being able to identify what is in a photo is not science that advances our understanding of the world.”
The Stanford car project generated a host of intriguing connections, not so much startling revelations. In the most recent paper, and one published earlier in the year by the Association for the Advancement of Artificial Intelligence, these were among the predictive correlations:
— The system was able to accurately predict income, race, education and voting patterns at the ZIP code and precinct level in cities across the country.
— Based on car attributes (including mpg ratings), the greenest city in America is Burlington, Vermont, while Casper, Wyoming, has the largest per-capita carbon footprint.
— Chicago is the city with the highest level of income segregation, with large clusters of expensive and cheap cars in different neighborhoods; Jacksonville, Florida, is the least segregated by income.
— New York is the city with the most expensive cars. El Paso, Texas, has the highest percentage of Hummers. San Francisco has the highest percentage of foreign cars.
Other researchers have used Google Street View data for visual clues for factors that influence urban development, ethnic shifts in local communities and public health. But the Stanford project appears to have used the most Street View images in the most detailed analysis so far.
The significance of the project, experts say, is a proof of concept — that new information can be gleaned from visual data with artificial intelligence software and plenty of human help.
The role of such research, they say, will be mainly to supplement traditional information sources like the government’s American Community Survey, the household survey conducted by the Census Bureau.
This kind of research, if it expands, will raise issues of data access and privacy. The Stanford project only made predictions about neighborhoods, not about individuals. But privacy concerns about Street View pictures have been raised in Germany and elsewhere. Google says it handles research requests for access to large amounts of its image data on a case-by-case basis.
Onboard cameras in cars are just beginning to proliferate as auto companies race to develop self-driving cars. Will some of the vast amounts of image data they collect be available for research, or kept proprietary?
Kenneth Wachter, a professor of demography at the University of California, Berkeley, said image-based studies could be a big help now that public response rates to sample surveys are declining. An AI-assisted visual census, he said, could fill in gaps in the current data, but also provide more timely insights than the traditional census, conducted every 10 years, on hot topics in public policy like “the geography and evolution of disadvantage and opportunity.”
To Nikhil Naik, a computer scientist and research fellow at Harvard, who has used Street View images in the study of urban environments, the Stanford project points toward the future of image-fueled research.
“For the first time in history, we have the technology to extract insights from very large amounts of visual data,” Naik said. “But while the technology is exciting, computer scientists need to work closely with social scientists and others to make sure it’s useful.”
