Thursday, January 25, 2018

9 Things Only iPhone Power Users Take Advantage of on Their Devices


For many, it’s difficult to imagine life before smartphones.
At the same time, it’s hard to believe that the original Apple iPhone, considered a genuine unicorn at the time thanks to its superior experience and stunning, rainbow-worthy display, was released over 10 years ago.
Even though the iPhone is older than most grade school students, some of its capabilities remain a mystery to the masses.
Sure, we all hear about the latest, greatest features, but what about those lingering in the background just waiting to be discovered?
Getting your hands wrapped around those capabilities is what separates you, a soon-to-be power user, from those who haven’t truly unleashed its full potential.
So, what are you waiting for? Release that unicorn and let it run free like the productivity powerhouse it was always meant to be.
Here are 9 ways to get started.

1. Get Back Your Closed Tabs

We’ve all done it. While moving between tabs or screens, our fingers tap the little “x” and close an important browser tab.
With the iPhone, all is not lost. You can get that epic unicorn meme back from oblivion!
The included Safari browser makes recovering a recently closed tab a breeze. Learn more about the process here: Reopen Tabs

2. Smarter Photo Searching

Searching through photos hasn’t always been the most intuitive process…until now.
Before, you had to rely on labels and categories to support search functions. But now, thanks to new machine learning supported features, the photos app is more powerful than ever.
The iPhone has the ability to recognize thousands of objects, regardless of whether you’ve identified them. That means you can search using keywords to find images with specific items or those featuring a particular person.
Just put the keyword in the search box and let the app do the hard part for you.

3. Find Out Who’s Calling

Sometimes, you can’t simply look at your iPhone’s screen to see who’s calling. Maybe you are across the room, are driving down the road, or have the phone safely secured while jogging.
Regardless of the reason, just grabbing it quickly isn’t an option. But that doesn’t mean you want to sprint across the room, pull your car over, or stop your workout just to find out it’s a robo-dial.
Luckily, you can avoid this conundrum by setting up Siri to announce who’s calling. Then you’ll always know whether it’s actually worth breaking away from the task at hand to answer.
See how here: Siri Announce Calls

4. Stop Squinting to Read Fine Print

In the business world, fine print is the donkey we all face on a regular basis. You can’t sign up for a service or look over a contract without facing some very small font sizes.
Thanks to the iPhone, you don’t have to strain your eyes (and likely give yourself a headache) to see everything you need to see when faced with fine print on paper. Just open the Magnifier, and your camera is now a magnifying glass.
See how it’s done here: Magnifier

5. Clear Notifications En Masse

Yes, notifications can be great. They let you know what’s happening without having to open every app individually.
But, if you haven’t tended to your iPhone for a while, they can also pile up quickly. And who has the time to handle a huge list of notifications one at a time?
iPhones that feature 3D Touch (iPhone 6S or newer) let you clear all of your notifications at once.
Clear your screen by following the instructions here: Clear Notifications

6. Close Every Safari Tab Simultaneously

iPhones running iOS 10 can support an “unlimited” number of Safari tabs at once. While this is great if you like keeping a lot of sites open, it can also get out of hand really quickly if you don’t formally close the ones you don’t need.
If you have more tabs open than stars in the sky, you can set yourself free and close them all at once.
To take advantage of this virtual reset, see the instructions here: Close All Safari Tabs

7. Request Desktop Site

While mobile sites are handy for the optimized experience, they can also be very limiting. Not every mobile version has the features you need to get things done, but requesting the desktop version wasn’t always the easiest process.
Now, you can get to the full desktop site with ease. Just press and hold on the refresh button at the top of the browser screen, and you’ll be given the option to request the desktop site.

8. Get a Trackpad for Email Cursor Control

There you are, doing the daily task of writing out emails or other long messages. As you go along, you spot it; it’s a mistake a few sentences back.
Trying to use a touchscreen to get back to the right place isn’t always easy, especially if the error rests near the edge of the screen.
Now, anyone with a 3D Touch enabled device can leave that frustration in the past. The keyboard can now be turned into a trackpad, giving you the cursor control you’ve always dreamed of having, the equivalent of finding a unicorn at the end of a rainbow.
Learn how here: Keyboard Trackpad

9. Force Close an Unresponsive App

If a single app isn’t doing its job, but the rest of your phone is operating fine, you don’t have to restart your phone to get the app back on track.
Instead, you can force close the unresponsive app through the multitasking view associated with recently used apps that are sitting in standby mode.
Check out how it’s done here: Force Close an App

Be a Unicorn in a Sea of Donkeys

Get my very best Unicorn marketing & entrepreneurship growth hacks.

Monday, January 22, 2018

A Eulogy for the Headphone Jack


Sometime in the mid-2000s, I was a freelance web developer in Philadelphia with some pretty crappy health insurance. I started having occasional heart palpitations, like skipped heartbeats. My doctor said it was probably not serious, but she could do tests to rule out very unlikely potential complications for about $1,000. That seemed pretty expensive to rent a portable EKG for a single day, so I googled around for some schematics. Turned out you could build a basic three-lead EKG with about $5 worth of Radio Shack parts (I no longer have the exact schematic, but something like this). I didn’t really understand what the circuit did, but I followed the directions and soldered it together on some protoboard, connected a 9V battery, and used three pennies as electrodes that I taped to my chest. I hooked the output of the device to my laptop’s line in and pressed ‘record’.
screenshot of my heartbeat in Audacity
Audacity displayed the heartbeat signal live as it recorded. Sure enough, I was having pretty common/harmless Premature Ventricular Contractions. There’s one on the right side of the screenshot above.
Calling the 1/8th inch connectors you’d find on pretty much every piece of consumer electronics until recently “audio jacks” does them a disservice. It’s like calling your car a “grocery machine”. Headphone and microphone ports are, at their most basic, tools for reading and producing voltages precisely and rapidly over time.
My homemade EKG is a voltage converter. Electrodes attached to points around my heart measure tiny differences in voltage produced by signals that keep it beating. Those measured signals are amplified to about plus or minus 2 volts. That new voltage travels through an audio cable to the “Line In” on my sound card.
Sound cards happen to carry sound most of the time, but they are perfectly happy measuring any AC voltage from -2 to +2 volts, 48,000 times per second, with 16 bits of accuracy. Put another way, your microphone jack measures the voltage on a wire (two wires for stereo) about every 0.02 milliseconds and records it as a value between 0 and 65,535. Your headphone jack does the opposite: by applying a voltage between -2 and +2 volts to a wire every 0.02 milliseconds, it creates a sound.

To any headphone jack, all audio is raw in the sense that it exists as a series of voltages that ultimately began as measurements by some tool, like a microphone or an electric guitar pickup or an EKG. There is no encryption or rights management, no special encoding or secret keys. It’s just data in the shape of the sound itself, as a record of voltages over time. When you play back a sound file, you feed that record of voltages to your headphone jack. It applies those voltages to, say, the coil in your speaker, which then pushes or pulls against a permanent magnet to move the air in the same way it originally moved the microphone when the sound was recorded.
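To make that record-of-voltages idea concrete, here is a minimal sketch of my own (not from the original post): it builds one second of a 440 Hz tone as 48,000 signed 16-bit samples, exactly the kind of list a headphone jack turns back into sound. The frequency and amplitude are arbitrary choices for illustration.

```swift
import Foundation

// A minimal sketch: one second of a 440 Hz tone as 48,000 signed 16-bit
// samples -- a literal record of voltages over time.
let sampleRate = 48_000.0      // samples per second
let frequency = 440.0          // pitch of the tone (A4)
let amplitude = 0.5            // half of full scale, to leave headroom

let samples: [Int16] = (0..<Int(sampleRate)).map { n in
    let t = Double(n) / sampleRate                                   // time of this sample, in seconds
    let voltage = amplitude * sin(2.0 * Double.pi * frequency * t)   // value between -1 and +1
    return Int16(voltage * Double(Int16.max))                        // quantize to 16 bits
}

// Prepend a WAV header (or hand the buffer to any audio API) and the
// headphone jack will turn these numbers back into moving air.
print("first five samples:", Array(samples.prefix(5)))
```

There is nothing hidden in that buffer: no container format, no DRM, just numbers waiting to become voltages.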
Smartphone manufacturers are broadly eliminating headphone jacks going forward, replacing them with Bluetooth and wireless headphones. We’re all going to lose touch with something, and to me it feels like something important.
The series of voltages a headphone jack creates is immediately understandable and usable with the most basic tools. If you coiled up some copper, put a magnet in the middle, and then hooked each side of the coil up to your phone’s headphone jack, it would make sounds. They would not be pleasant or loud, but they would be tangible and human-scale and understandable. It’s a part of your phone that can read and produce electrical vibrations.
Without that port, we will forever be beholden to device drivers between our sounds and our speakers. We’ll lose reliable access to an analog voltage we could use to drive any magnetic coil on earth, any pair of headphones. Instead, we’ll have to pay a toll, either through dongles or wireless headphones. It will be the end of a common interface for sound transfer that survived more or less unchanged for a century, the end of plugging your iPod into any stereo bought since WWII.
Entrepreneurs and engineers will lose access to a nearly universal, license-free I/O port. Independent headphone manufacturers will be forced into a dongle-bound second-class citizenry. Companies like Square — which made brilliant use of the headphone/microphone jack to produce credit card readers that are cheap enough to just give away for free — will be hit with extra licensing fees.
Because a voltage is just a voltage. As long as you stay within the input range, nobody can dictate what you do with it. In the case of the Square magstripe reader, the device is powered by the energy normally used to drive speakers (harvesting the energy of a sine wave played over the headphone output), and it transmits data to the microphone input.
There’s also the HiJack project, which makes this whole repurposing process open source and general purpose. They provide circuits that cost less than $3 to build and can harvest 7 mW of power from a sound playing out of an iPhone’s headphone jack. Because you have raw access to hardware that reads and writes voltages, you can layer an API on top of it to do anything you want, and it isn’t licensable or limited by outside interests; it’s just some reasonably basic analog electronics.
I don’t know exactly how losing direct access to our signals will harm us, but doesn’t it feel like it’s going to somehow? Like we may get so far removed from how our devices work, by licenses and DRM, dongles and adapters, that we no longer even want to understand them? There’s beauty in the transformation of sound waves to electricity through a microphone, and then from electricity back to sound again through a speaker coil. It is pleasant to understand. Compare that to understanding, say, the latest Bluetooth API. One’s an arbitrary and fleeting manmade abstraction; the other, a mysterious and dazzlingly convenient property of the natural world.

So, if you’re like me and you like headphone jacks, what can you do? Well, you could only buy phones that have them, which I think you’ll be able to do for a couple years. Vote with your dollar!
You can also tell companies that are getting rid of headphone jacks that you don’t like it. That your mother did not raise a fool. That aside from maybe water-resistance, there’s not a single good reason you can think of to give up your headphone jack. Tell them you see what they’re up to, and you don’t like it. You can say this part slightly deeper, through gritted teeth, if you get to say it aloud. Or, just italicize it so they know you are serious.

Wednesday, January 17, 2018

The Things Junior UX Designers Should Do More Of (Not Just Design)


As a designer starting out at the beginning of your career, you may not know what to expect from your first job. You could be given lots of work, and because you are the new designer on the team, you do things without question. You might think you are expected to know everything, because nobody told you to seek out the things you need to help you.
Having worked in the design industry almost every summer in college, I’ve learned a thing or two about how a new designer, such as myself, can navigate challenges and learn in environments shaped by implied messages about what we should or shouldn’t do. Knowing the basic tools and techniques of good design is essential, but it’s the small details of how we work that can help us progress and open doors. Here are a few tips that growing designers should take into consideration during their first year on the job to accelerate career growth.

Asking for Help Doesn't Make You Stupid

It’s okay to ask for help; the real issue, the one some designers treat as a big no-no, is the phrasing. Instead of directly asking for help, ask for feedback and advice. If you need help with research, join a research session. If you need help moving a project forward, ask other designers to join you in prioritizing ideas. This will give you direction. Instead of receiving a hard-cut answer, you receive validation and perspective, things that will help you develop your own point of view. Designers don’t receive answers; they problem-solve their way to them.

Saying “No” is better than saying “Yes” all the time*

Note the asterisk. You are in control of what you want to do. You can decide when to reply to that e-mail or whether to go to that meeting. We are often given so many things to do that we can’t do all of them, yet we think we have to. Many designers, especially at the beginning of their careers, do everything they are told to do, and this distracts them from the work they need to do the most. Decide what is most important to getting your work done, and prioritize.
Don’t say yes to things that get in the way of producing quality work.
Delegating tasks and prioritizing is hard, but if you can do it, you will get so much done (and more). It’s okay to say no for valid reasons, because it tells people that you know what’s important.

Speak up

During a critique, we are expected to provide feedback for our peers, but not everyone does it, whether because they are self-conscious about their thoughts or because they don’t make the effort to help. Don’t be selfish with ideas. Ideas are meant to be expressed and to help our fellow designers design for the people. Feedback is a gift. Feedback is what results in more iterations and better experiences.

Take Breaks

I used to work hard constantly, whether I was at home or with friends and family… you name it. But then I realized that, without fail, I will be working for the rest of my life, and work isn’t ever really “done.” I was spending time on something fleeting when I could have been with the people I loved, doing the things I loved outside of work. Too much work also increases stress, which leads to burnout. It makes sense to do as much work as you can to reach a certain job or rank, but that takes time. Just do what you can, and relax when you feel overworked or exhausted. In the end, health is more important than work, because without health we can’t work.

Be Present

As tempting as it is to work from home, especially for those who have the privilege of doing so all the time, it is crucial to be present. Even if the quality of your work hasn’t been affected, collaboration is an essential part of how we do things as designers. Being present in the office can make all the difference, especially when working with the people on your team. It’s not a team if everyone isn’t present.

If you have any questions about design, message me on LinkedIn and I’ll write about it!


Tuesday, January 16, 2018

How to catch a criminal using only milliseconds of audio


Scientists can tell far more from your recorded voice than you might think. Image: Pixabay
Simon Brandon, Freelance journalist

A prankster who made repeated hoax distress calls to the US Coast Guard over the course of 2014 probably thought they were untouchable. They left no fingerprints or DNA evidence behind, and made sure their calls were too brief to allow investigators to triangulate their location.
Unfortunately for this hoaxer, however, voice analysis powered by AI is now so advanced that it can reveal far more about you than a mere fingerprint. By using powerful technology to analyse recorded speech, scientists today can make confident predictions about everything from the speaker’s physical characteristics — their height, weight, facial structure and age, for example — to their socioeconomic background, level of income and even the state of their physical and mental health.
One of the leading scientists in this field is Rita Singh of Carnegie Mellon University’s Language Technologies Institute. When the US Coast Guard sent her recordings of the 2014 hoax calls, Singh had already been working in voice recognition for 20 years. “They said, ‘Tell us what you can’,” she told the Women in Tech Show podcast earlier this year. “That’s when I started looking beyond the signal. How much could I tell the Coast Guard about this person?”
Rita Singh is an expert in speech recognition
What your voice says about you
The techniques developed by Singh and her colleagues at Carnegie Mellon analyse and compare tiny differences, imperceptible to the human ear, in how individuals articulate speech. They then break recorded speech down into tiny snippets of audio, milliseconds in duration, and use AI techniques to comb through these snippets looking for unique identifiers.
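As a rough illustration of that frame-by-frame idea, here is a toy sketch of my own (not Singh’s actual pipeline): it slices a recording into 5-millisecond frames and computes one crude feature per frame. A real system would extract far richer measurements from each snippet and feed them to trained models.

```swift
import Foundation

// Toy illustration: chop a signal into millisecond-scale frames and compute
// one simple per-frame measurement (RMS energy).
let sampleRate = 48_000.0
let frameLength = Int(sampleRate * 0.005)   // 5 ms of audio per frame (240 samples)

/// Root-mean-square energy of one frame -- a crude stand-in for the kinds of
/// low-level measurements a voice-profiling model might consume.
func rmsEnergy(_ frame: ArraySlice<Double>) -> Double {
    let sumOfSquares = frame.reduce(0.0) { $0 + $1 * $1 }
    return (sumOfSquares / Double(frame.count)).squareRoot()
}

// `samples` stands in for one second of a decoded voice recording (values in -1...1).
let samples = (0..<48_000).map { _ in Double.random(in: -1...1) }

let features: [Double] = stride(from: 0, through: samples.count - frameLength, by: frameLength).map {
    rmsEnergy(samples[$0..<($0 + frameLength)])
}
print("extracted \(features.count) frame-level features")
```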
Your voice can give away plenty of environmental information, too. For example, the technology can guess the size of the room in which someone is speaking, whether it has windows and even what its walls are made of. Even more impressively, perhaps, the AI can detect signatures left in the recording by fluctuations in the local electrical grid, and can then match these to specific databases to give a very good idea of the caller’s physical location and the exact time of day they picked up the phone.
This all applies to a lot more than hoax calls, of course. Federal criminal cases from harassment to child abuse have been helped by this relatively recent technology. “Perpetrators in voice-based cases have been found, have confessed, and their confessions have largely corroborated our analyses,” says Singh.
Portraits in 3D
And they’re just getting started: Singh and her fellow researchers are developing new technologies that can provide the police with a 3D visual portrait of a suspect, based only on a voice recording. “Audio can give us a facial sketch of a speaker, as well as their height, weight, race, age and level of intoxication,” she says.
But there’s some way to go before voice-based profiling technology of this kind becomes viable in a court. Singh explains: “In terms of admissibility, there will be questions. We’re kind of where we were with DNA in 1987, when the first DNA-based conviction took place in the United States.”
This has all proved to be bad news for the Coast Guard’s unsuspecting hoaxer. Making prank calls to emergency services in the US is regarded as a federal crime, punishable by hefty fines and several years of jail time, and usually the calls themselves are the only evidence available. Singh was able to produce a profile that helped the Coast Guard eliminate false leads and identify a suspect, against whom they hope to bring a prosecution soon.
Given the current exponential rate of technological advancement, it’s safe to say this technology will become much more widely used by law enforcement in the future. And for any potential hoax callers reading this: it’s probably best to stick to the old cut-out newsprint and glue method for now. Just don’t leave any fingerprints.

Monday, January 15, 2018

Apple Will Reject Your Subscription App if You Don’t Include This Disclosure


Have you read the Paid Applications Agreement, Schedule 2, Section 3.8(b)?

If you’ve ever submitted an app to the App Store, you know the frustration when Apple rejects your submission. Even more so when you thought you’d followed all the rules. As it turns out, Apple can bury requirements wherever they want, and it’s your burden to keep up.
About a year ago, Apple started rejecting apps that didn’t comply with Schedule 2, Section 3.8(b) of the Paid Applications Agreement, a verbose list of self-evident truths about subscriptions. The Paid Applications Agreement is a 37-page document that you have to agree to before you can submit your app, and it is only available via iTunes Connect in the form of a downloadable PDF.
The actual contents of Schedule 2, Section 3.8(b):
I really like the part about privacy policies.
3.8(b) requires that you “clearly and conspicuously disclose to users” all of the above bullets. The first few items seem harmless enough, but then we get off into the weeds.
Apple wants you to reproduce, “clearly and conspicuously”, all the details of auto-renewing subscriptions. This information should be part of the standard StoreKit subscription purchase flow. None of these bullets is app-specific. They are just boilerplate legalese.
iOS’s purchase UI, more than enough information.
Apple has an iOS-level user interface flow for in-app purchases that is quite good as of iOS 11. This view already covers most of the in-the-weeds bullets, except telling users about the 24-hour renewal policy.
Requiring every developer to implement their own version of 3.8(b) is costly and creates a fractured experience for the user. Apple should be putting this in the standard sheet. But it’s Apple’s walled garden. When they say jump, you say “fine, whatever.”

How to Comply With 3.8(b)

According to recent rejections that I’ve seen (as of Jan. 8th, 2018), reviewers are being more particular about what your purchase flow must include. From a recent rejection:
Adding the above information to the StoreKit modal alert is not sufficient; the information must also be displayed within the app itself, and it must be displayed clearly and conspicuously during the purchase flow without requiring additional action from the user, such as opening a link.
All of the information in 3.8(b) must be “displayed clearly and conspicuously during the purchase flow without requiring additional action from the user, such as opening a link.” Your beautiful and compact purchase flow must include in it, somewhere, nine bullets written by a lawyer.
Confide, recently updated, achieved it with the following:
According to one reviewer, being below the fold with a leading arrow qualifies as “clearly and conspicuously.”
For another data point, I know of one recently rejected developer who had the same information, but in another view that was linked from the purchase flow with a button. This did not qualify (according to one reviewer).

A Template

Include a customized version of the following “clearly and conspicuously” in your purchase flow:
A [purchase amount and period] purchase will be applied to your iTunes account [at the end of the trial or intro| on confirmation].
Subscriptions will automatically renew unless canceled within 24 hours before the end of the current period. You can cancel anytime in your iTunes account settings. Any unused portion of a free trial will be forfeited if you purchase a subscription.
For more information, see our [link to ToS] and [link to Privacy Policy].
Put it on the screen where you initiate the in-app purchase. Below the fold might be OK, but you may want to add something that leads users there.
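For what it’s worth, here is one hypothetical way to lay that out, sketched with UIKit and StoreKit. The class name, placeholder price, and product property are my own inventions; the only point is that the disclosure text sits on the same screen as the purchase button, visible without extra taps.

```swift
import StoreKit
import UIKit

// A sketch of a purchase screen that keeps the 3.8(b) disclosure in the flow.
// `monthlyProduct` is a hypothetical SKProduct fetched earlier via SKProductsRequest.
final class SubscribeViewController: UIViewController {
    var monthlyProduct: SKProduct?

    private let disclosureLabel: UILabel = {
        let label = UILabel()
        label.numberOfLines = 0
        label.font = UIFont.systemFont(ofSize: 12)
        label.text = """
        A $4.99/month purchase will be applied to your iTunes account on confirmation. \
        Subscriptions will automatically renew unless canceled within 24 hours before the \
        end of the current period. You can cancel anytime in your iTunes account settings. \
        Any unused portion of a free trial will be forfeited if you purchase a subscription.
        """
        return label
    }()

    @objc private func buyTapped() {
        guard let product = monthlyProduct else { return }
        // Kicks off the standard StoreKit purchase sheet.
        SKPaymentQueue.default().add(SKPayment(product: product))
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white

        let buyButton = UIButton(type: .system)
        buyButton.setTitle("Subscribe", for: .normal)
        buyButton.addTarget(self, action: #selector(buyTapped), for: .touchUpInside)

        // The disclosure sits directly below the buy button, on the purchase
        // screen itself, not behind a link or on a separate view.
        let stack = UIStackView(arrangedSubviews: [buyButton, disclosureLabel])
        stack.axis = .vertical
        stack.spacing = 12
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)
        NSLayoutConstraint.activate([
            stack.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            stack.leadingAnchor.constraint(equalTo: view.layoutMarginsGuide.leadingAnchor),
            stack.trailingAnchor.constraint(equalTo: view.layoutMarginsGuide.trailingAnchor),
        ])
    }
}
```

Per the rejections quoted above, the same text on a separate screen reachable only by a link was not enough for at least one reviewer, so keep it on the view with the purchase button.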
UPDATE: Readers are telling me it may also be required that you include this information in your App Store description. That’s a much easier change to make, so I recommend adding it there too.

Why Has Apple Taken a Legal Problem and Made It Ours?

Apple shouldn’t be burying submission requirements in the bodies of contracts that nobody will read. If Apple wants developers to know something, they should put it in the App Store Guidelines, HIG, or developer documentation. The cost of making changes in a software project right at the end can be astronomical. Dropping a bomb like this on developers at submission shows a total lack of regard for our costs.
Why didn’t they just update the iOS in-app purchase sheet? I speculate that Apple discovered some legal exposure from in-app subscriptions and fixed it with lawyers instead of designers. This problem could be universally solved with an iOS update, but I think some side effect of Apple being a vast, lumbering bureaucracy made forcing 3.8(b) onto developers the more politically convenient path. Apple, if you are reading this, please either update the iOS sheet or move the requirements to the App Store guidelines, so fewer developers get caught unawares.
RevenueCat is the best way to implement subscriptions in your mobile app. We handle all the complicated parts so you can get back to building. Request an invite today at https://www.revenuecat.com/

Sunday, January 14, 2018

Beyond the Rhetoric of Algorithmic Solutionism


If you ever hear that implementing algorithmic decision-making tools to enable social services or other high stakes government decision-making will increase efficiency or reduce the cost to taxpayers, know that you’re being lied to. When implemented ethically, these systems cost more. And they should.
Whether we’re talking about judicial decision making (e.g., “risk assessment scoring”) or modeling who is at risk for homelessness, algorithmic systems don’t simply cost money to implement. They cost money to maintain. They cost money to audit. They cost money to evolve with the domain that they’re designed to serve. They cost money to train their users to use the data responsibly. Above all, they make visible the brutal pain points and root causes in existing systems that require an increase of services.
Otherwise, all that these systems are doing is helping divert taxpayer money away from direct services and into the pockets of for-profit entities, under the illusion of helping people. Worse, they’re helping usher in a diversion of liability, because time and time again, those in powerful positions blame the algorithms.
This doesn’t mean that these tools can’t be used responsibly. They can. And they should. The insights that large-scale data analysis can offer are inspiring. The opportunity to help people by understanding the complex interplay of contextual information is invigorating. Any social scientist with a heart desperately wants to understand how to relieve inequality and create a more fair and equitable system. So of course there’s a desire to jump in and try to make sense of the data out there to make a difference in people’s lives. But to treat data analysis as a savior to a broken system is woefully naive.
Doing so obfuscates the financial incentives of those who are building these services, the deterministic rhetoric that they use to justify their implementation, the opacity that results from having non-technical actors try to understand technical jiu-jitsu, and the stark reality of how technology is used as a political bludgeoning tool. Even more frustratingly, what data analysis does well is open up opportunities for experimentation and deeper exploration. But in a zero-sum context, that means the resources to do something about what is learned are siphoned off to the technology. And, worse, because the technology is supposed to save money, there is no budget for using that data to actually help people. Instead, technology becomes a mirage. Not because the technology is inherently bad, but because of how it is deployed and used.
READ THIS BOOK!
Next week, a new book that shows the true cost of these systems is being published. Virginia Eubanks’ book “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation.
Eubanks eschews the term “ethnography” because she argues that this book is immersive journalism, not ethnography. Yet, from my perspective as a scholar and a reader, this is the best ethnography I’ve read in years. “Automating Inequality” does exactly what a good ethnography should do — it offers a compelling account of the cultural logics surrounding a particular dynamic, and invites the reader to truly grok what’s at stake through the eyes of a diverse array of relevant people. Eubanks brings you into the world of technologically mediated social services and helps you see what this really looks like on the ground. She showcases the frustration and anxiety that these implementations produce; the ways in which both social services recipients and taxpayers are screwed by the false promises of these technologies. She makes visible the politics and the stakes, the costs and the hope. Above all, she brings the reader into the stark and troubling reality of what it really means to be poor in America today.
“Automating Inequality” is on par with Barbara Ehrenreich’s “Nickel and Dimed” or Matthew Desmond’s “Evicted.” It’s rigorously researched, phenomenally accessible, and utterly humbling. While there are a lot of important books that touch on the costs and consequences of technology through case studies and well-reasoned logic, this book is the first one that I’ve read that really pulls you into the world of algorithmic decision-making and inequality, like a good ethnography should.
I don’t know how Eubanks chose her title, but one of the subtle things about her choice is that she’s (unintentionally?) offering a fantastic backronym for AI. Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case for how we should think that AI often means “automating inequality” in practice.
This book should be mandatory for anyone who works in social services, government, or the technology sector because it forces you to really think about what algorithmic decision-making tools are doing to our public sector, and the costs that this has on the people that are supposedly being served. It’s also essential reading for taxpayers and voters who need to understand why technology is not the panacea that it’s often purported to be. Or rather, how capitalizing on the benefits of technology will require serious investment and a deep commitment to improving the quality of social services, rather than a tax cut.
Please please please read this book. It’s too important not to.
Data & Society will also be hosting Virginia Eubanks to talk about her book on January 17th at 4PM ET. She will be in conversation with Julia Angwin and Alondra Nelson. The event is sold out, but it will be livestreamed online. Please feel free to join us there!
