No judgement, just curious.
I’ve thought of leaving for years. Problem is: I have a wife who doesn’t work, two kids, piles of debt, and live in the most expensive city in the country. We can’t just leave, our whole life is here. And moving to a bank now to make more money (if I even would, the tenure and promotions do pile up) means working harder and more hours. That comes out of spending time with my young kids.
I spent a couple of years having a tough time with this. It genuinely caused a long-term, slow burn existential crisis that only recently started to settle into a stable state. All of life is moral compromise, I think. It sucks and I’m sorry, but it would be too hard to stop and I’m just sort of accepting that now.
The framing of your question has no good answer, like the classic "are you still beating your wife?" where either a yes or a no is a bad look.
I was much warier of trusting Google with my data before joining than after. The sacred nature of user data is constantly drilled into you, and the technical protections seemed sound.
The element I was most uncomfortable with was the profit-shifting/tax optimization, i.e. the Double Irish Dutch Sandwich.
From a privacy perspective, I'm not very satisfied with the status quo, but at least for Ads, privacy is now the top priority with a strict deadline, so the situation may improve. But I still think collecting less (or no) user data solves only part of the problem; users still don't understand the real trade-off between their privacy and the benefits (to themselves and to the overall web ecosystem) of the ads they're watching. Without this information, users cannot really make informed arguments and decisions about their privacy and ads. IMO, this is one of the fundamental reasons perceptions of ads vary so widely across people. But since this is not just a technical problem like third-party cookies, but more of a subjective issue, it might be a much harder problem to tackle.
I think some of the backlash is justified and pushes the company to do better, but some of it feels like backlash for backlash's sake, which just makes everyone here pay less attention to what critics think.
(speaking only for myself)
A company that’s as large and fundamental as Google will always have detractors. They pay too much. They pay too little. They collect too much data. They don’t make their data open enough. There will always be something.
https://www.economist.com/podcasts/2020/04/16/has-covid-19-k...
I don't consider the company evil. In fact, I think it's leading the way in how we talk about ethics in computing. People talk about Maven, but it was an internal revolt--not external influence--that challenged the issues with the project.
Secondly, there's a lot of criticism about supposed violations of "Don't Be Evil". As a multinational company, it's almost impossible these days to avoid morally complex issues. Microsoft, on the other hand, has recently been seen in a much better light due to Nadella's leadership in the post-Ballmer era. But Microsoft never had a "Don't Be Evil" motto and isn't getting criticism for, e.g., its defense contracts. I don't consider Microsoft an immoral company either.
Concerns about privacy and data handling are warranted. There should be stronger consumer protections in this area, and legislation like GDPR is good and should be more widespread. I honestly don't think Google is the company to worry about here (or at least not the biggest threat). It's the companies that aren't as closely watched that pose the highest risk to you. Google is pushing the state of the art here, with efforts like differential privacy. From what I can see, Google respects your PII and has more advanced internal mechanisms for handling it than any other company I know about.
IMHO, Google has tried to be as transparent as possible in this area. Concerned about data collection? Review your activity at myactivity.google.com. Want to move all of your data out, or back it up? Use takeout.google.com. Location data? Google regularly emails you a report summarizing your location history so you know what's going on.
Is it perfect? No. Is there valid criticism? Sure. Could it do a lot better in many different areas? Absolutely.
Is it evil? I honestly just don't see it. Feel free to ask about individual issues if you disagree.
The issue is: what qualifies as "evil"?
There are a lot of employees there who want to do the right thing, which is why activism has been such a big deal for so long.
The tax avoidance thing used to annoy me but that's nothing compared to everything else that has happened. Even Vic Gundotra's incompetent management of Google+ looks tame now.
Maven was handled really poorly by Diane Greene, who seemed to live in her own bubble during the whole thing. Urs Hölzle didn't fare much better in this regard. Moving all of SRE under Cloud was a big mistake, and it has backfired spectacularly because there are enough people in SRE who are not happy with the whole "We want to work with the military" thing.
The shift towards cloud was a huge change and a lot of us would have preferred if Cloud had become its own company.
The high profile sexual harassment cases that we learned about did not do much to improve trust in Google execs. And the fact that the founders of the company just vanished into thin air speaks volumes about how bad things were.
Kent Walker in particular represents everything that is wrong with Google. He protected David Drummond, he's been pushing for more work with the US military and is as morally bankrupt as it gets. At one TGIF he tried to justify forced arbitration because it was better for employees to not have to go to court and when asked why not let them choose what they wanted to do he just walked off the stage. That's Kent Walker for you.
Heather Adkins is not much better than Kent. We all thought she could be trusted until two things happened: she actively sought to destroy every copy of the Dragonfly investigation document that delroth@ had put together from searchable information, and she then claimed that the privacy review had proceeded as usual, which we all know is not true given that Yonatan Zunger eventually left because of Dragonfly.
Another interesting character is Laszlo Bock, whom many people used to look up to. When Eric Schmidt was caught colluding with other companies, all he had to say was: "We do not believe we did anything wrong." That kind of moral compass, or lack thereof, is what defines Google's DNA.
Rachel Whetstone used to be loved here as well, for some mysterious reasons given her political background.
Google execs have managed to alienate many high-value employees who either left of their own accord or were retaliated against until they had no choice but to go: Erica Joy, Liz Fong-Jones, Laura Nolan, Kelly Ellis, Claire Stapleton, Chelsey Glasson, Meredith Whittaker, Laurence Berland, Rebecca Rivers, and so on and so forth.
This company could have been a profitable version of Xerox PARC. Instead it became SV's version of Monsanto.
It is a great place to work. You will meet lots of ridiculously smart people and learn a lot. So put Google on your resume, learn a lot, and then run far, run fast.
That being said, I do think big tech has been scapegoated for years. Everything is blamed on these companies: eroding privacy, Brexit, Trump, fake news, social breakdown -- at some point we as a society need to take responsibility as well because frankly these platforms are a reflection of us as a species.
I understand the skepticism, but after seeing how things work on the inside I trust my employer far more.
You show me any sufficiently big company and I will show you enough reasons to call them evil.
When I joined Google I was very influenced by the "Google is the good guy" PR. I never liked the Ads business model, but there were a few Google products I really liked (Inbox, Maps, and YT notably).
It's hard to pinpoint the wake-up call, but I'd say it was Project Maven (aka let's make an AI to help US drones kill more people), and all the lying and mislabeling there was around it ("our AI doesn't kill people", etc…).
Wrt Cambridge Analytica, you have to go back to the mindset of 2016. At the time FB was mostly attacked for being a walled garden hoarding all the precious data, so FB allowed users to share data with external apps. Then one app managed to get data about 100k Americans and their friends and used it to help Trump. And then "data portability" was evil.
The general backlash against Google is the obvious one: Google collects a ton of data about you. Google is an ad company, and it makes as much money as it does BECAUSE it can put ads in front of users where they will be most effective. It can only do that because it knows a lot about users; that's Google's core business. The trade-off makes a lot of sense to me: users get to use a really good product for free (it is hard to impossible to compete with Google search), and in return Google gets to serve them targeted ads that it sells to other businesses. That's a fairly obvious and IMO very good deal, it's easy to see how this happened historically, and it's likely not going to change: people are too used to getting good search for free, and you simply can't charge money for a search engine these days.

I think the issue is that users aren't always conscious of this transaction taking place (you get free search results and email, and Google gets to use the data it collects when you use the services). But the good news is that, as far as I can tell, Google takes privacy concerns very, very seriously. It's mind-boggling (but technically obvious) how much Google _could_ know about you, since it has your location data, search history, and theoretically even your emails (as far as I know, Gmail data is fairly sacred and not exploited to better sell you ads). The main reason I think Google is still a good company (and doesn't deserve the backlash) is that, simply, Google doesn't exploit all the data it could collect about people for nefarious purposes, AFAICT. Now granted, I'm working in a role that is far removed from those decisions, but it seems to me that, as a whole, Google tries to be nice and fair, and the protocols and technical solutions appear sensible and carefully designed to put users' privacy first.
Given the position Google is in, I think it is trying very hard to do the right thing and protect its users' rights and data, and does not exploit them in any "evil" way, as far as I can tell.
Like I said, I think this is a decision that makes economic sense: even as it is, Google faces enough scrutiny, and it's in its own interest not to exploit all the data it could collect. I'm sure all the scandals Facebook has had have hurt its bottom line. At Google, I trust leadership not to fuck this up.
First, I believe in our basic mission to make the world more open and connected, and I think in many ways we do work toward that. We're especially seeing that now. Tons of people are using FB (including IG and WA) as a way to stay connected during this crisis, and it makes me proud.
On privacy: yeah, we've made some mistakes. Were they evil mistakes? Let's put that in context. I was around when "information wants to be free" was everyone's mantra. When Facebook was being criticized for not making information they had available to third parties. I do not accept the gaslighting about attitudes toward privacy always being like they are now. They weren't. Should FB have put in better controls, and more strongly enforced those controls? Almost certainly. Was the lack of such efforts "evil"? Only if you apply today's standards of diligence to events and decisions in a very different time. Right now, I can see some of the problems that occur as we apply rigorous access control to user data even as it moves between internal systems. How many of our critics have ever needed to deal with such issues? There will always be more to do, but I'd say we're a bit ahead of the industry in that area now.
On disinformation: this is the real "damned if you do, damned if you don't" scenario. There's no logically-consistent reason why Facebook should exercise more control over content than Verizon does over the content of phone calls. To do so is to invite accusations of censorship, and run the risk of being treated as a publisher rather than a carrier. Nonetheless, we're devoting more human and computer time to detecting disinformation than most companies have in total. Vast data centers' worth. I'm in storage, so I see only the edge of this, in the form of (insanely complex) analysis pipelines and models. That's enough to get a feel for the scale of these efforts, and it boggles my mind. People are honing these techniques all the time, and having some successes. These successes are drastically under-reported compared to failures, for reasons I won't get into here, but they are reported. Anybody who's actually paying attention can see thousands of accounts being banned at a time for inauthentic content. Think for a moment about the computational complexity of detecting a thousand-node subset within a billion-node graph. Most people who accuse FB of taking this issue lightly just don't grasp the scale at which we operate and the additional difficulty that entails.
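That scale argument can be made concrete with a toy sketch. This is entirely my own illustration, not anything from FB's actual pipelines: exhaustively checking every candidate k-account subset is combinatorially hopeless even on small graphs, which is why this kind of detection has to lean on near-linear graph passes (here, a union-find over the edge list) rather than brute-force search.

```python
# Toy illustration of the scale problem in coordinated-account detection.
# All names and numbers here are hypothetical, chosen only to show the math.
import math


def naive_subset_count(n, k):
    """Candidate k-node subsets an exhaustive search over an n-node graph
    would have to examine: C(n, k)."""
    return math.comb(n, k)


def connected_components(n, edges):
    """Near-linear alternative: group accounts by shared links using
    union-find with path halving, roughly O(E) for E edges."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    return len({find(i) for i in range(n)})


# Even a *tiny* graph makes exhaustive subset search explode:
# C(1000, 10) already exceeds 10**23 candidate subsets, and the real
# problem is a thousand-node subset in a billion-node graph.
print(naive_subset_count(1_000, 10))

# ...while one union-find pass groups linked accounts cheaply:
# nodes {0,1,2}, {3,4}, and {5} form 3 components.
print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))
```

The point is only the asymmetry: the naive search space grows combinatorially in both n and k, while clustering-style passes grow roughly linearly in the number of edges, so only the latter family of techniques is even plausible at a billion-node scale.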
Does FB still do things that make me cringe? Sure does. The disrespect for user choice (e.g. chronological-timeline settings constantly being reset) pisses me off to no end. The demands put on moderators of large groups, and the dearth of tools made available to them, is inexcusable. Likewise for the lack of decent support or appeal processes when users are adversely affected by our own screwups. I've come to the conclusion that even though the people I work with directly are great, there are some other people at the other end of the company (user-facing product rather than deep infra) and at a different level who ... well, let's just say I wouldn't get along with them so well. It does give me pause sometimes, but not enough to outweigh my belief in the basic mission.
You want evil companies? How about those who make much of their money from contracts with ICE/CBP/NSA? How about those that have helped hollow out the economy with their anti-labor "gig economy" BS, leading to millions of unemployed right now? Or the worse half of the finance or defense industries? How about those helping to destroy our health or our planet? There are absolutely better companies I could work for. My ideal would be to do what I do at a company that's making vaccines or something, but they didn't seem interested in hiring me. But there are other companies that were interested and I turned them down because I don't approve of what they do. There are a whole lot of pots calling the kettle black.
So are you seriously asking people who work in tech companies that spread cat videos and enable video chats if their companies are "evil"? With a straight face? Come on.
This thinking betrays both that people in the industry take themselves far too seriously, and that we're so brainwashed into accepting these kinds of glaring double standards that we don't even stop for a second to question them.