“Don’t be evil.”
Immediately recognized as Google’s former motto and part of its Code of Conduct, this phrase has come under fire in recent days.
In one story, a former Google researcher decided to spin up her own research institute to investigate the ethical implications of artificial intelligence research, while in a second story, three fired employees have sued the tech giant for violating its own code of conduct.
Both stories underscore a tension within large tech companies: their products can make enormous strides toward improving communication, learning, and community-building, yet they can also carry darker ethical consequences, something many of these companies have been grappling with for some time.
In the first story, reported by Wired, former Google employee Timnit Gebru claims she was fired for co-authoring a research paper under her Google affiliation. The paper focused on the ethics of AI use in language modeling, which is part of the science behind speech recognition and AI-generated text. As Wired noted in December 2020, the paper did not “attack Google or its technology and seems unlikely to have hurt the company’s reputation.”
Google claimed that Gebru had resigned but later admitted it did fire Gebru’s fellow researcher Margaret Mitchell, who had worked with Gebru on a team investigating ethical approaches to artificial intelligence.
Now, Gebru has founded a new organization called Distributed Artificial Intelligence Research (DAIR). Although DAIR currently operates as a project under the nonprofit Code for Science and Society, an organization that fosters groups working on public interest technology, Gebru plans to set it up as an independent nonprofit in the near future. As she told Wired,
“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures.”
Meanwhile, NPR reports that software engineers Rebecca Rivers, Sophie Waldman and Paul Duke were fired from Google after organizing company workers to stand against projects they considered contrary to the “don’t be evil” clause in the company’s Code of Conduct. These projects included one that aided in identifying and apprehending suspected illegal immigrants for the federal government. In one instance, these employees had circulated a petition against what they perceived to be human rights abuses.
“Don’t be evil” was coined by employee Paul Buchheit (the person behind Gmail) and supported by fellow employee Amit Patel during a 2000 meeting to decide on company values. In an interview from 2007, Buchheit said that the motto was intended to be funny, “a bit of a jab at a lot of the other companies, especially our competitors, who at the time, in our opinion, were kind of exploiting the users to some extent.” He added that as he and Patel kept pushing for it, it took on “a life of its own,” until the company finally embraced it, adding it to the Google Code of Conduct, where it remained until the company removed it in 2018.
In the lawsuit filed in California, the three former Google workers claim Google retaliated against them for bringing attention to the company “doing evil.” And while it will be a while before we know the outcome of the legal action, the National Labor Relations Board’s preliminary findings suggest that the workers may have been unlawfully terminated.
Both of these stories highlight the ethical challenges tech companies face as they keep pushing the envelope, developing new technologies that are inherently neither good nor bad but that have significant consequences out in the real world. It’s worth watching how the big tech companies react as the US and European governments call for accountability and possibly changes in how they operate in the future.
In other news
- AI research helps with mathematical theorems. Despite the controversy over AI research highlighted in our main story, the Independent reports that the Google-owned AI research company DeepMind has broken a new barrier in theoretical mathematics. To tackle theorems that have in the past relied on human intuition and creativity to solve, scientists successfully applied AI technology to uncover relationships in knot theory and combinatorial representation theory. This marks the first time that machine learning has proven up to the task of solving difficult theoretical problems.
- Square goes 3-D. Last week we talked about Jack Dorsey stepping down from Twitter. Just a few days later Wired reports that he announced his company Square has changed its name to Block (and now uses the domain block.xyz). Dorsey is reportedly undertaking new projects to do with the blockchain, said to be his latest obsession. Announced on Square’s 12th anniversary, the newly named Block now includes the personal payment service Cash App, the music service Tidal, and a new crypto-based open developer platform.
- U.K. orders Facebook to sell Giphy. Due to Facebook’s potential for anticompetitive activity, the U.K.’s Competition and Markets Authority has ordered Facebook to sell its recent acquisition, Giphy. Giphy provides GIFs, or short animations, that are valuable to social platforms because they increase engagement. Giphy is the market leader for GIFs and a company estimated to be worth $400 million. The regulators described how Facebook’s owner Meta could shut down Giphy’s ad business so that it no longer competes with Facebook’s own ad business. Regulators were also concerned that owning Giphy would increase Facebook’s market power, allowing it to limit other platforms’ access to Giphy’s GIFs or change the terms of that access.
- MyCryptoWallet collapses, leaving bitcoin traders stranded. Australian cryptocurrency trading platform MyCryptoWallet is under investigation, according to Australia’s ABC News. Liquidators are looking closely at the platform, established in 2017, which allowed people to buy and sell many types of cryptocurrency, including the major coins Bitcoin, Ethereum, XRP, and Litecoin. Some 20,000 customers will now have to join the creditors’ queue and see how much of a return they receive on an estimated $21 million — which is not likely to be very much.
- Robot with real mannerisms. Engineered Arts, a robotics company based in Cornwall, U.K., has become a world leader in AI humanoid robotics. The team at Unboxed discovered how the company’s initial vision, creating simple robots to entertain museum visitors, led to full-blown robotic innovation. The team’s talent for making robots more interesting led to the creation of Ameca, now considered the world’s most advanced AI humanoid robot. Ameca is so lifelike that it appears to respond to you with non-verbal cues. The Engineered Arts team says its robot is on the way to being a fully developed communication system.
Tip of the week
This week’s tip is for everyone looking to land a new remote job. In a post on LinkedIn, Ivy Barley, a Program Manager at Microsoft, advises updating your resume for each job opportunity you pursue. To make these resume edits effectively, it’s essential to think like an algorithm.
Algorithmic thinking requires defining the problem, then determining the best solution. Before applying for a remote opportunity, take each job responsibility listed (the problems), and adjust your resume to highlight your experience in that area (the solutions). Like a Google search, the company’s job listing is the query. Your resume should serve as the perfect results page.
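The query-and-results analogy above can be made concrete. Below is a minimal sketch (all function names, the stop-word list, and the sample texts are our own illustrations, not from the post) that compares a job listing’s keywords against a resume and reports which ones are missing:

```python
import re
from collections import Counter

# Common filler words to ignore when extracting keywords.
STOP_WORDS = {
    "the", "a", "an", "and", "or", "to", "of", "in", "for",
    "with", "on", "is", "are", "as", "be", "will", "you", "your",
}

def keywords(text):
    """Extract lowercase word tokens, dropping stop words."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

def coverage(job_listing, resume):
    """Return the job-listing keywords not yet present in the resume."""
    wanted = keywords(job_listing)
    have = keywords(resume)
    return [w for w in wanted if w not in have]

# Hypothetical example inputs.
job = "Remote program manager with experience in Python and stakeholder communication"
cv = "Program manager: led Python automation projects for distributed teams"
print(coverage(job, cv))  # keywords worth working into the resume
```

Real applicant-tracking systems are more sophisticated than simple token matching, but the exercise mirrors the tip: treat each listed responsibility as a query and check that your resume “ranks” for it.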
[NEWS] Former employees put Google’s ethics under a microscope.