Programming & IT Tricks

Sunday, January 7, 2018

Why I Switched from Windows to Mac for UI/UX Design


— Seeking the perfect workstation

Windows vs. Mac has always been a popular topic of debate among techies. This article, though, is not about comparing the two platforms; rather, it gives five simple reasons why I switched from Windows to Mac for my UI/UX design job.
The design space is filled with a number of awesome graphic design tools. These tools have endless possibilities and give great flexibility to designers, with Photoshop, Illustrator, CorelDRAW, and After Effects being a few of the classic names. But in industry, productivity becomes the most important factor, and most of the classic tools, in spite of being so versatile, may not be very productive. You can refer to this article to understand why.
Most of the new-age tools, especially for UI/UX design, like Sketch, Adobe XD, and Principle, are designed for productivity. They may not be as versatile as the classic ones, but they perform their intended tasks very efficiently. The catch: most of them are Mac exclusive. The Windows version is either planned for the future or may never be developed.
That became a problem for me. I wanted to work with these tools, but the platform was an issue. After a lot of consideration, I finally decided to switch to Mac. The following were the five main reasons, at least for me, that favoured the Mac as the better workstation for UI/UX design.

1. Access to Mac-Exclusive Apps

There was a time when the Mac App Store had far fewer applications than Windows. But as the number of Mac users (especially designers) has grown, developers have started supporting macOS more aggressively than Windows. So to access new UI/UX design tools like Sketch, Origami, Principle, and Framer, you need macOS.

2. Beautiful Retina Display

Designers spend most of their time staring at the screen, tweaking colors or making text stand out from a background image. The Retina display on the Mac is, without doubt, one of the best displays on the market, so it will always be a solid choice for designers.

3. Portability of the Setup

A MacBook might not be the most powerful machine, but unless you are playing a graphics-intensive game or rendering 3D compositions, that extra power goes unused. My use case is fairly modest, and the power a 13" MacBook Pro offers is enough for me. The portability it offers, however, is remarkable: the high-quality Retina display replaces the need for an extra monitor, and the highly sensitive trackpad replaces the need for a mouse. MacBooks also have some of the best battery life on the market.

4. There Are Too Many PCs to Choose From

My laptop was getting old and outdated, and I really wanted a new machine. But the number of options when buying a PC is almost unlimited, and as the number of options increases, decision time also increases (Hick's law). Macs, on the other hand, offer a limited set of options, and those options narrow further if you are on a tight budget like me 😛 . The only dilemma for me was choosing between the 2016 and 2015 13-inch MacBook Pro models. I opted for the 2015 model because of the obvious price difference.

5. Visual Stimulus

Apple products are, without doubt, among the most beautifully designed. And designers love to be visually stimulated and constantly surrounded by beautiful things. That was another driving force behind my buying a MacBook.

Finally, the MacBook Pro (13-inch, 2015 model) became my primary workstation, and I have now been using it for more than six months. There are certain things I really like and dislike about the machine. Things I like: the standby mode, graphics performance, the gestures and accuracy of the trackpad, the multiple-desktop feature, portability, and the clarity of the screen. Things I really dislike: the lack of compatibility with other devices, the Finder app, and storage limitations.
For sure, the perfect workstation still doesn't exist, and you have to choose from the available options. I found the MacBook Pro closest to what I was looking for.

Friday, January 5, 2018

How we recreated Amazon Go in 36 hours


John Choi, me, our project apparatus, Ruslan Nikolaev, and Soheil Hamidi at our demo!
My colleagues and I wanted to create something that would make people go “wow” at our latest hackathon.
Because imitation is the sincerest form of flattery and IoT is incredibly fun to work with, we decided to create our own version of Amazon Go.
Before I explain what it took to make this, here's the 3-minute demo of what we built!
There were four of us. Ruslan, a great full-stack developer who had experience working with Python. John, an amazing iOS developer. Soheil, another great full-stack developer who had experience with Raspberry Pi. And finally, there was me, on the tail end of an Android developer internship.
I quickly realized that there were a lot of moving parts to this project. Amazon Go works on the basis of real-time proximity sensors in conjunction with a real-time database of customers and their carts.
We also wanted to take things a step further and make the entry/exit experience seamless. We wanted to let people enter and exit the store without needing to tap their phones.
In order to engage users as a consumer-facing product, our app would need a well-crafted user interface, like the real Amazon Go.
On the day before the hackathon, I put together a pseudo-design doc outlining what we needed to do within the 36 hour deadline. I incorporated the strengths of our team and the equipment at hand. The full hastily assembled design doc can be seen below.

There were six main components to EZShop, our version of Amazon Go.
A quick diagram I whipped up visualizing the components of this project

The Kairos Facial Recognition API

The Kairos facial recognition API was a fundamental component for us. It abstracted away the work of identifying and storing unique faces. It has two endpoints that we used: /enroll and /verify.
/enroll is described as:
Takes a photo, finds the faces within it, and stores the faces into a gallery you create.
We enrolled all new customers into a single “EZShop” gallery. A unique face_id attribute would be returned and stored with the customer’s registered name in our real-time database.
When we wanted to verify a potential customer’s image, we would POST it to the /verify endpoint. This would return the face_id with the highest probability of a match.
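As a rough sketch of how that exchange might look in Python (the endpoint URL, headers, and response shape here are assumptions for illustration, not our exact code):

```python
import json
from urllib import request

# Hypothetical endpoint and credentials; substitute your own Kairos values.
KAIROS_VERIFY_URL = "https://api.kairos.com/verify"
HEADERS = {"app_id": "APP_ID", "app_key": "APP_KEY",
           "Content-Type": "application/json"}

def verify(image_b64, gallery="EZShop"):
    """POST a base64-encoded photo to /verify and return the parsed JSON."""
    body = json.dumps({"image": image_b64, "gallery_name": gallery}).encode()
    req = request.Request(KAIROS_VERIFY_URL, data=body, headers=HEADERS)
    with request.urlopen(req) as resp:
        return json.load(resp)

def best_match(candidates):
    """Given candidate matches like [{"face_id": ..., "confidence": ...}],
    return the face_id with the highest probability, or None if empty."""
    if not candidates:
        return None
    return max(candidates, key=lambda c: c["confidence"])["face_id"]
```

The important part for our flow was `best_match`: whatever the raw response looked like, we only ever cared about the single most probable `face_id`.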
In a real-world implementation, it probably would have been a better idea to use a natively implemented facial recognition pipeline with TensorFlow instead of a network API. But given our time constraints, the API served us very well.

The Realtime Firebase Database

The Firebase database was another fundamental piece to our puzzle. Every other component interacted with it in real time. Firebase allows customized change listeners to be created upon any data within the database. That feature, coupled with the easy set-up process, made it a no brainer to use.
The schema was incredibly simple. The database stored an array of items and an array of users. The following is an example JSON skeleton of our database:
{
  "items": [
    {
      "item_id": 1,
      "item_name": "Soylent",
      "item_stock": 1,
      "price": 10
    }
  ],
  "users": [
    {
      "face_id": 1,
      "name": "Subhan Nadeem",
      "in_store": false,
      "cart": [
        1
      ]
    }
  ]
}
New users would be added to the array of users in our database after registering with the Kairos API. Upon entry or exit, the customer’s boolean in_store attribute would be updated, which would be reflected in the manager and personal app UIs.
Customers picking up an item would result in an updated item stock. Upon recognizing which customer picked up what item, the item's ID would be added to that customer's cart array.
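In Firebase these are just child updates, but the logic can be sketched as plain Python over the JSON skeleton above (the function name is mine):

```python
def pick_up_item(db, face_id, item_id):
    """Decrement the item's stock and append its ID to the customer's cart.
    `db` is a dict matching the JSON skeleton above; mutated in place."""
    item = next(i for i in db["items"] if i["item_id"] == item_id)
    item["item_stock"] -= 1
    user = next(u for u in db["users"] if u["face_id"] == face_id)
    user["cart"].append(item_id)
    return db
```

Because every client listened on the same paths, this single write was enough to ripple the change out to the manager and personal apps.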
I had planned for a cloud-hosted Node/Flask server that would route all activity from one device to another, but the team decided that it was much more efficient (although more hacky) for everybody to work directly upon the Firebase database.

The Manager and Personal Customer Apps

John, being the iOS wizard that he is, finished these applications in the first 12 hours of the hackathon! He really excelled at designing user-friendly and accessible apps.

The Manager App


This iPad application registered new customers into our Kairos API and Firebase database. It also displayed all customers in the store and the inventory of store items. The ability to interact directly with the Firebase database and observe changes made to it (e.g. when a customer’s in_store attribute changes from true to false) made this a relatively painless process. The app was a great customer-facing addition to our demo.

The Personal Shopping App


Once the customer was registered, we would hand a phone with this app installed to the customer. They would log in with their face (Kairos would recognize and authenticate). Any updates to their cart would be shown on the phone instantly. Upon exiting the store, the customer would also receive a push notification on this phone stating the total amount they spent.

The Item Rack, Sensors, and Camera

Soheil and Ruslan worked tirelessly for hours to perfect the design of the item shelf apparatus and the underlying Pi Python scripts.
The item rack apparatus. Three items positioned in rows, a tower for the security camera, and ultrasonic sensors positioned at the rear
There were three items positioned in rows. At the end of two of the rows, an ultrasonic proximity sensor was attached. We only had two ultrasonic sensors, so the third row had a light sensor under the items, which did not work as seamlessly. The ultrasonic sensors were connected to the Raspberry Pi, which ran simple Python scripts to process readings of the distance to the next closest object (either the closest item or the end of the rack). The light sensor detected a "dark" or "light" state (dark if an item was on top of it, light otherwise).
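The sensor math itself is simple. Here is a sketch of the kind of conversion the Pi scripts performed, assuming HC-SR04-style sensors (the constant and the tolerance below are illustrative, not our exact values):

```python
SPEED_OF_SOUND_CM_PER_S = 34300  # approximate speed of sound at room temperature

def echo_to_cm(pulse_seconds):
    """The echo pulse times the round trip to the nearest object,
    so halve it before converting to distance."""
    return pulse_seconds * SPEED_OF_SOUND_CM_PER_S / 2

def item_lifted(distance_cm, front_item_cm, tolerance_cm=2.0):
    """The front item is gone when the sensor suddenly sees something
    farther away than where that item used to sit."""
    return distance_cm > front_item_cm + tolerance_cm
```

When `item_lifted` flipped to true, the script would update the item's stock in the database, which is what triggered the camera's recognition step described next.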
When an item was lifted, the sensor’s reading would change and trigger an update to the item’s stock in the database. The camera (Android phone) positioned at the top of the tower would detect this change and attempt to recognize the customer picking up the item. The item would then instantly be added to that customer’s cart.

Entrance and Exit Cameras

I opted to use Android phones as our facial recognition cameras, due to my relative expertise with Android and the easy coupling phones provide when taking images and processing them.
The phones were rigged on both sides of a camera tripod, one side at the store’s entrance, and the other at the store exit.
A camera tripod, two phones, and lots of tape
Google has an incredibly useful Face API that implements a native pipeline for detecting human faces and other related useful attributes. I used this API to handle the heavy lifting for facial recognition.
In particular, the API provided an approximate distance of a detected face from the camera. Once a customer’s face was within a close distance, I would take a snapshot of the customer, verify it against the Kairos API to ensure the customer existed in our database, and then update the Firebase database with the customer’s in-store status.
I also added a personalized text-to-speech greeting upon recognizing the customer. That really ended up wowing everybody who used it.
The result of this implementation can be seen here:
Once the customer left the store, the exit-detection state of the Android application was responsible for retrieving the items the customer picked up from the database, calculating the total amount the customer spent, and then sending a push notification to the customer’s personal app via Firebase Cloud Messaging.
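Computing the bill from the schema is just a sum over the cart; as a plain-Python sketch (function name is mine):

```python
def bill_total(db, face_id):
    """Sum prices over the customer's cart; IDs may repeat if the same
    item was picked up more than once."""
    prices = {i["item_id"]: i["price"] for i in db["items"]}
    user = next(u for u in db["users"] if u["face_id"] == face_id)
    return sum(prices[item_id] for item_id in user["cart"])
```

The resulting total was what Firebase Cloud Messaging delivered to the customer's phone as the push notification.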

Of the 36 hours, we slept for about 6. We spent our entire time confined to a classroom in the middle of downtown Toronto. There were countless frustrating bugs and implementation roadblocks we had to overcome. There were some bugs in our demo that you probably noticed, such as the cameras failing to recognize several people in the same shot.
We would have also liked to implement additional features, such as detecting customers putting items back on the rack and adding a wider variety of items.
Our project ended up winning first place at the hackathon. We set up an interactive booth for an hour (the Chipotle box castle that can be seen in the title picture) and had over a hundred people walk through our shop. People would sign up with a picture, log into the shopping app, walk into the store, pick up an item, walk out, and get notified of their bill instantly. No cashiers, no lines, no receipts, and a very enjoyable user experience.
Walking a customer through our shop
I was proud of the way our team played to each individual’s strengths and created a well put-together full-stack IoT project in the span of a few hours. It was an incredibly rewarding feeling for everybody, and it’s something I hope to replicate in my career in the future.
I hope this gave you some insight into what goes on behind the scenes of a large, rapidly prototyped, and hacky hackathon project such as EZShop.

Thursday, January 4, 2018

I Was Supposed to be an Architect


I’m leading a VR development studio, but the truth is I’ve been navigating a series of epic career learning curves that have taken me far outside of my comfort zone, and I wouldn’t have it any other way.
Mainstreet, Mall or Modem
On my quest to start sharing more about our process and lessons learned on the virtual frontier, I thought I’d start with a bit of background on how I arrived here in the first place.
I studied and practiced architecture, but I've been fascinated with virtual technologies as far back as I can remember. In fact, my architectural thesis project in grad school (image above) focused on how VR and digital technologies would someday revolutionize architecture, specifically retail architecture. This was 17 years ago, when VR was very expensive and largely inaccessible, but the brilliant pioneers innovating in this field were demonstrating its massive potential. It was only a matter of time before VR would find its way to the mainstream.
Like so many other physical manifestations, from music to books and beyond, I believe buildings are subject to a similar digital transcendence. It’s already happening in a pretty big way, and this is just the beginning of a major architectural transformation that might take another decade or two to fully surface, but I digress… I’m saving this interest for a future pivot, and almost certainly another epic learning curve to go with it.
I tried using Everquest to visualize architecture.
I had a level 47 Dark Elf Shadow Knight in Everquest, but spent most of my time wandering around, exploring the environments. What I really wanted to do was import my own architectural models to explore them inside the game.
If they could have such elaborate dungeons and forts to explore in Everquest, with people from all around the world working together in the game virtually, why couldn’t the same technology also be used to visualize a new construction project, with the architect, building owner, and construction team exploring or collaborating on the design together?
This quest to visualize architecture in a real-time world became a ‘first principle’ in my career path that I’ve been chasing ever since.
I met my amazing and tremendously patient wife, Kandy, in grad school, and after studying architecture together in Europe and graduating, we practiced architecture for some time before starting our own firm, Crescendo Design, focused on eco-friendly, sustainable design principles.
Then one day in 2006, I read an article in Wired about Second Life — a massively multi-player world where users could create their own content. Within an hour, I was creating a virtual replica of a design we had on the boards at the time. I had to use the in-world ‘prims’ to build it, but I managed.
I was working in a public sandbox at the time, and when I had the design mostly finished, I invited the client in to explore it. They had two young kids, who were getting a huge kick out of watching over their parents' shoulders as they walked through what could soon be their new home.
The Naked Lady, the Sheriff Bunny, and Epic Learning Curve #1.
We walked in the front door, when suddenly a naked woman showed up and started blocking the doorways. I reported her to the ‘Linden’ management, and a little white bunny with a big gold sheriff’s badge showed up and kicked her out. “Anything else I can help with?” Poof.. the bunny vanished and we continued our tour. That’s when I realized I needed my own virtual island (and what an odd place Second Life was).
But then something amazing happened that literally changed my career path, again.
I left one of my houses in that public sandbox overnight. When I woke up in the morning and logged in, someone had duplicated the house to create an entire neighborhood — and they were still there working on it.
Architectural Collaboration on Virtual Steroids
I walked my avatar, Keystone Bouchard, into one of the houses and found a group of people speaking a foreign language (I think it was Dutch?) designing the kitchen. They had the entire house decorated beautifully.
One of the other houses had been modified by a guy from Germany who thought the house needed a bigger living room. He was still working on it when I arrived, and while he wasn’t trained in architecture, he talked very intelligently about his design thinking and how he resolved the new roof lines.
I was completely blown away. This was architectural collaboration on virtual steroids, and opened the door to another of the ‘first principle’ vision quests I’m still chasing. Multi-player architectural collaboration in a real-time virtual world is powerful stuff.
Steve Nelson, Jon Brouchoud, and Carl Bass delivering Keynote at Autodesk University 2006
One day Steve Nelson’s avatar, Kiwini Oe, visited my Architecture Island in Second Life and offered me a dream job designing virtual content at his agency, Clear Ink, in Berkeley, California. Kandy and I decided to relocate there from Wisconsin, where I enjoyed the opportunity to build virtual projects for Autodesk, the U.S. House of Representatives, Sun Microsystems and lots of other virtual installations. I consider that time to be one of the most exciting in my career, and it opened my eyes to the potential for enterprise applications for virtual worlds.
Wikitecture
I started holding architectural collaboration experiments on Architecture Island. We called it ‘Wikitecture.’ My good friend, Ryan Schultz, from architecture school suggested we organize the design process into a branching ‘tree’ to help us collaborate more effectively.
Studio Wikitecture was born, and we went on to develop the ‘Wiki Tree’ and one of our projects won the Founder’s Award and third place overall from over 500 entries worldwide in an international architecture competition to design a health clinic in Nyany, Nepal.
These were exciting times, but we constantly faced the challenge that we weren't Second Life's target audience. This was a consumer-oriented platform, and Linden Lab was resolutely and justifiably focused on growing its virtual land sales and in-world economy, not on building niche-market tools to help architects collaborate. I don't blame them: more than 10 years after it launched, Second Life's in-world economy of real-money transactions is still larger than that of some small countries.
We witnessed something truly extraordinary there, something I haven't seen or felt since. Suffice it to say, almost everything I've done in the years since has been toward my ultimate goal of someday, some way, somehow, instigating the conditions that gave rise to such incredible possibilities. We were onto something big.

Monday, January 1, 2018

New Year Offer: The iPhone is available at Rs 9000


If you are considering a new smartphone in the new year, here is a strong option. E-commerce site Amazon is offering a discount of Rs 9,010 on the 64GB variant of the iPhone 8: you can buy this Rs 64,000 phone on Amazon for Rs 54,999.

The 32GB variant of the iPhone SE also carried a discount of Rs 8,000, after which it was selling at Rs 17,999. However, the price of that phone has since been revised to Rs 18,999. Customers can buy the Apple iPhone 8 in space gray, gold, and silver color variants.

In September, at an event in California, Apple unveiled the iPhone 8 and iPhone 8 Plus alongside the 10th-anniversary iPhone X. The iPhone 8 and iPhone 8 Plus feature a new design with glass on both the front and back.

The iPhone 8 and iPhone 8 Plus have a 12-megapixel rear camera, and the iPhone 8 Plus adds a dual-camera setup with a 12-megapixel telephoto lens. Both offer strong video-recording features.

Sunday, March 23, 2014

Microsoft just exposed email's ugliest secret

Email is more broken than you think


If you're hiding something from Microsoft, you'd better not put it on Hotmail.
It came out yesterday that the company had read through a user's inbox as part of an internal leak investigation. Microsoft has spent today in damage-control mode, changing its internal policies and rushing to point out that they could have gotten a warrant if they’d needed one. By all indications, the fallout is just beginning.
But while Microsoft is certainly having a bad week, the problem is much bigger than any single company. For the vast majority of people, our email system is based on third-party access, whether it's Microsoft, Google, Apple or whoever else you decide to trust. Our data is held on their servers, routed by their protocols, and they hold the keys to any encryption that protects it. The deal works because they're providing important services, paying our server bills, and for the most part, we trust them. But this week's Microsoft news has chipped away at that trust, and for many, it's made us realize just how frightening the system is without it.
We've known for a while that email providers could look into your inbox, but the assumption was that they wouldn't. Even a giant like Microsoft is likely to sustain lasting damage, simply because there are so many options for free web-based email. Why stick with Microsoft if you trust Apple or Google more? But while companies have created a real marketplace for privacy and trust, you'll find the same structural problems at every major service. Ad-supported email means companies have to scan your inbox for data, so they need access to every corner of your inbox. (That's been the basis of Microsoft's Google-bashing "Scroogled" campaign.) Free email also means someone else is hosting it; they own the servers, and there's no legal or technical safeguard to keep them from looking at what's inside.
A close look at company privacy policies only underlines the fact. As Microsoft pointed out in its initial statement, "Microsoft’s terms of service make clear our permission for this type of review." Look at the company privacy policy, and you’ll see that's true: "We may access or disclose information about you, including the content of your communications, in order to ... protect the rights or property of Microsoft." That’s a straightforward description of what happened in the Hotmail case.
You’ll find similar language in the privacy policies from Yahoo and Google. Yahoo reserves the right to look through your emails to "protect the rights, property, or personal safety of Yahoo, its users and the public." Google’s language is nearly identical, saying it will access user data "if we have a good-faith belief that access, use, preservation or disclosure of the information is reasonably necessary to … protect against harm to the rights, property or safety of Google." Apple is a little better, but not much, promising to disclose user content "if we determine that for purposes of national security, law enforcement, or other issues of public importance, disclosure is necessary or appropriate." What counts as public importance, exactly?
What’s worse, the current laws won’t do anything to stop them. For standard law enforcement, it takes a warrant to read a person's email — but there's no such restriction on hosting providers. Peeking into your clients' inbox is bad form, but it's perfectly legal. Even if the rights weren't reserved in the terms of service, it's not clear there are even grounds for a lawsuit. Without stronger privacy laws, all companies have to worry about is bad PR.
Microsoft's mole hunt isn't unprecedented either. There have been LOVEINT-style abuses of sysadmin access, as when a Google engineer was fired for spying on friends' chat logs. Last year, Harvard searched its own professors' email accounts as part of a cheating investigation. (The dean behind the search stepped down a few months later.) But those are just the instances we're aware of. In all likelihood, there are dozens of similar incidents that were simply never made public, encouraged by the open nature of third-party hosting. As long as the access is legal and technically feasible, there's no reason to think it will stop.
Anyone living a modern and complicated life over email is left in an awkward place. The crypto crowd has an easy answer: use end-to-end encryption, locking up emails with GnuPG and online chats with programs like Cryptocat. You can hold your own keys, making sure no one can decrypt the message but the person you're sending it to, and count on open-source code reviews to expose anyone who tries to slip a backdoor into the code.
It's a good system and it works, but for most users, it's still a bunch of extra inconvenience for no obvious benefit. In the end, it's easier to blame Microsoft for violating our trust and move on to the next company, with the same data practices and the same terms of service. With Google, Apple, Yahoo, and countless other free webmail services waiting in the wings, there are plenty of options to choose from. They'd never do a thing like this... right?
