Guest post from Sam Warner, edited by Anne Currie
In software, things get built quickly: “move fast and break things”. We see companies, teams and individuals proud of how fast they can make <shiny new thing> but as a community we shouldn’t let this be all that defines us. We need to be more three-dimensional. I find it hard to believe that all we are really concerned with is how quickly we can get features out the door. Don’t get me wrong - as a software consultant, I love providing as much business value to a customer as quickly as I can, but I’m just as proud, if not more, when I can provide software that has gone through considerate design and causes some social good.
Tech Ethics, now more than ever, is something everyone in the software development industry needs to consider. We need to dive in. And when we do, we need to be able to create resources and talk about what we discover, but that is far easier said than done. Of late, we’ve seen some technology take nasty turns - people have been the victims of hate speech on microblogging platforms, had their social media data exploited for monetary or political gain, and even fallen victim to misinformation in the form of so-called “fake news”. The users of our systems are growing apprehensive of software once again.
Yet I don’t believe every instance of technology that causes negative social impact is intentionally harmful or exploitative. Sometimes, I think that negligence and lack of awareness are two of the biggest contributors. For example, when Mark Zuckerberg first made Facebook in his university bedroom, I don’t think it was ever planned to be the dopamine-exploiting, data-crunching tool we see it as today. Is it possible that a lack of awareness in Tech Ethics led to Facebook being in a position where it could exploit its users?
We’re beginning to learn from our mistakes. We’re beginning to see fallacies around the benefits of unethical tech debunked. We’re building up case studies, tools and teachings with the goal of preventing us from causing and experiencing these regrettable issues again.
Education and awareness is our best friend.
Sharing Your Resources
Up until very recently, my go-to list of advice/resources when anyone asked “how do I become better at this tech ethics thing?” was the following:
- You’re already off to a great start by asking that question.
- Go and watch QCon London’s Tech Ethics in Action videos (they’re on InfoQ and all amazing!)
- Run a retrospective on your current project, but from an entirely ethical standpoint - step into other people’s shoes and view your project through their eyes.
- Discuss the output and ideas that follow with colleagues and friends.
- Push for change, because even the smallest contributions build up to make a world of difference!
I say ‘until recently’, because as of about a week ago there is a new top tip to add to the collection.
“Check out the EthicalOS Toolkit!”
Ethical OS
Developed by the Institute for the Future and the Omidyar Network, the EthicalOS toolkit is one of my new favourite tools for teaching about and raising awareness of ethical issues in technology. Their website states their objective is to aid in “anticipating the future impact of today’s technology”, and I think it lives up to this lofty claim.
The toolkit is, in fact, a PDF with three main tools:
- Tool 1: A set of fourteen detailed scenarios involving ‘future technology’ designed to start conversations and train us to identify problems on the horizon.
- Tool 2: A risk mitigation manual - a guide to eight areas in which your users could fall victim to the hard-to-anticipate and unwelcome consequences of your software, and how to avoid these problems in the first place.
- Tool 3: A handful of strategies and business models to future-proof your development, with an ultimate goal of making healthy development platforms the industry norm!
Each area is further divided into bite-size sections, and the guide can be picked up and put down really easily - you only need five minutes to go over some of these concepts for the first time, and it begins to place you in the mindset of a technology ethicist. This is before even beginning to look at the Risk Mitigation Form provided separately on EthicalOS’s website, which distils the manual into an easy-to-complete checklist of consideration points and actionable tasks.
The guide cleverly balances people and software to produce a refreshing way for everyone to enhance their understanding in this area - from beginners dipping their toes in for the very first time, to thought leaders, there is something in there for everyone.
Their single-page website, and their linked resources, are written in a way that is comprehensive, yet understandable. What lies at the core is a very difficult topic, but the entire thing is presented so clearly that you are almost tricked into believing that it would be simple to write. Make no mistake, this is a new benchmark for learning resources.
The toolkit really highlights how many questions we should be asking.
Sci Fi?
The scenarios posed to readers vary from the sort of technology you might have already seen or even used, to hyper-futuristic technology that isn’t yet on the horizon. For example, one of the scenes that I think relates heavily to technology right now is scenario 3. This particular setting explores “addiction and the dopamine economy” by detailing a world in which some social media companies decide voluntarily to enforce time limits, creating a social divide between people that spend their time locked-in online and those that choose to spend their time offline. I have friends that already use apps to limit screen time - think Offtime/Moment/Breakfree - maybe this situation isn’t far from reality, and social media providers could implement this tomorrow. This lies in stark contrast to the more Sci Fi scenario 9, where “read-and-write” neurotech implants are a reality and users have their thoughts and memories uploaded to the cloud for sharing and analysis, making us ask how we might mitigate and prevent a surveillance state.
Not only do the scenarios vary in terms of how far they are from becoming a reality, they also vary by sector. Scenario 6 is based around the legal industry, asking us if we are ready for a world in which “predictive justice” tools become the preferred method for determining prison sentences, and what the problems with this might be. The impact delivery drones might have on the world is discussed in scenario 12, provoking discussion about the future of logistics from an inclusive and moral standpoint. Scenario 14 covers the automotive sector (very popular where I am based in the West Midlands), talking about a reality in which self-driving vehicles become vulnerable to a new type of real-time ransomware, and how deadly this issue could be.
Whatever your day job, interests, or concerns for future tech are, I think there are a few scenarios in here for everyone.
I myself have found this to be an absolutely invaluable tool for bringing together colleagues and friends and starting to talk about the impact of what we do. The scenarios provide a fantastic foundation to kick-start us into action, and by giving readers a meaningful and specific vocabulary, they facilitate further conversation and, more importantly, supply terms to describe problems we might identify - saving us from explaining personal issues we’ve experienced in the past, or explaining discomfort as a ‘gut feeling’.
I’ll add that one comment I have heard from peers about this toolkit is how useful it would be for any startup. I agree, but would stress to any readers sitting on the fence that this is not a kit solely for startups. There is something in there for everyone (convenient - that’s what I asked for in my intro). The resources hosted on EthicalOS are there to help all product teams build responsibly.
When you are reading through it, the scale of the responsibility might feel a little overwhelming. You might feel that your current processes are familiar and ‘good enough’ that you don’t want to change. You might not initially know exactly what to do with this deluge of information. But dedicating time to these resources and taking heed of their warnings will be worth it.
After reading through this article, maybe you’ll feel a little lost. Here’s a summary of my five top tips for making the most of the toolkit and checklist:
- Read through the toolkit. Don’t feel like you should blast through the whole thing in one go, as there is a whole lot of information in there! Instead, check through small sections frequently - I did this with my morning coffee.
- Go through the supplied checklist with a previous or current project in mind. Try and identify risk areas and begin to think of a plan to mitigate them.
- Run a workshop (internally, at a meetup group, or with friends) to go through one of the scenarios. For bonus points, choose a scenario based around some of the group’s previously identified risks.
- Ask if there is scope to change your ways of working. This applies to everything from open-source projects to commercial work - for any software development process you have, can you add regular risk reviews or ethical feasibility checks when considering new features?
- Pass it on! Let more people know that this toolkit exists.
I truly believe that any team that takes ideas from this list and the EthicalOS toolkit will be on to the start of something great. The smallest contributions to ethical technology and considerate design build up to create a global network of developers that truly care about the human implications of what they make. By starting at home, and with small steps, we can begin to reprogram the values of software engineering.
We should all strive for the level of future-proofing that EthicalOS is pushing us towards. By doing this, we will standardise these processes and thoughts, and we might begin to ensure we don’t regret what we build.