I worked on a Wi-Fi geolocation app for a DoD contractor. When I realized it would be used to track down and kill people, I was faced with a moral dilemma. I’ll share the technical details that made the app so interesting to me, and discuss the ethics of building tools without understanding their use.
In 2011, with a team of interns at a Department of Defense contractor, I created a Wi-Fi geolocation app that could find the location in 3D space of every hotspot near you in seconds. We designed formulas to model signal strength and probable distances, and used machine learning to optimize completion time and accuracy.
I was so caught up in the details that it took me months to see it would be used to kill people. What do we do when we discover that we’re building something immoral or unethical? How can we think through the uses of our software to avoid this problem entirely?
In 2011, with a team of interns at a Department of Defense contractor, I created a Wi-Fi geolocation app. Moving in a straight line, it could find the distance to every Wi-Fi hotspot near you in a few seconds. If you turned, it’d tell you which direction. Climb some stairs, and it would find the hotspot in 3D space. It would even work if the hotspot was moving. We designed formulas to model losses in signal strength and to calculate probable distances, tuned constants with machine learning, and optimized runtime from minutes to seconds and accuracy from 40 meters to 15 feet. It is one of the most technical projects I’ve worked on.
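The kind of signal-strength-to-distance modeling described above can be sketched with the standard log-distance path-loss model. This is an illustrative textbook version, not the project's actual formulas; the reference power and path-loss exponent here are assumed values that would normally be calibrated per environment:

```python
import math

def distance_from_rssi(rssi_dbm, ref_power_dbm=-40.0, path_loss_exponent=2.7):
    """Estimate distance (meters) to a hotspot from received signal strength.

    Log-distance path-loss model:
        RSSI = P_ref - 10 * n * log10(d)
    solved for d:
        d = 10 ** ((P_ref - RSSI) / (10 * n))

    ref_power_dbm:      RSSI expected at 1 m (assumed; calibrated in practice)
    path_loss_exponent: n, ~2 in free space, higher indoors (assumed value)
    """
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

A weaker signal maps to a larger probable distance; combining several such distance estimates taken from different positions is what lets a moving surveyor pin the hotspot down in 2D or 3D.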
Then, I realized it’d be used to kill people based on where their phones were, and I quit. As technologists, we’re capable of teaching computers to do amazing things. As citizens and humans, we’re responsible for the results. It’s our duty to think through the applications of the code we build and to refuse to create things that we oppose morally.
The talk will be broken into two parts: first, a technical dive into a really cool project and what my team needed to do to achieve what we did: Kalman filters, probabilistic models, real-time mesh networking over BTLE, surveying Wi-Fi in promiscuous mode on mobile devices, and even a UAV. This part will be fairly theoretical, with high-level descriptions of these topics and how and why we used them. It primarily serves to share the experience that I had: the technology was so interesting that it distracted me from what it would be used for.
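To give a flavor of one of those building blocks, here is a minimal one-dimensional Kalman filter for smoothing a noisy stream of measurements (such as RSSI-derived distance estimates). This is a generic textbook sketch under assumed noise parameters, not the project's implementation:

```python
class Kalman1D:
    """Minimal scalar Kalman filter for smoothing noisy measurements."""

    def __init__(self, process_var=1e-3, measurement_var=0.5):
        self.process_var = process_var          # Q: drift of the true value (assumed)
        self.measurement_var = measurement_var  # R: sensor noise variance (assumed)
        self.estimate = None                    # current state estimate
        self.error_var = 1.0                    # P: uncertainty of the estimate

    def update(self, measurement):
        if self.estimate is None:
            self.estimate = measurement         # initialize on first reading
            return self.estimate
        # Predict: uncertainty grows by the process noise
        self.error_var += self.process_var
        # Update: blend prediction and measurement by the Kalman gain
        gain = self.error_var / (self.error_var + self.measurement_var)
        self.estimate += gain * (measurement - self.estimate)
        self.error_var *= (1.0 - gain)
        return self.estimate
```

Fed a sequence of noisy distance readings, the filter's estimate settles near the true value while its gain shrinks as confidence grows; the real system would extend this idea to a multi-dimensional state (position, and velocity for moving hotspots).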
The audience will come to the realization with me that the very cool project we’d been building would be used in very uncool ways. After that, the talk becomes a cautionary tale. We’ll discuss what it means to be morally or ethically opposed to something, how to think through potential uses and misuses of our software, and when and how to say “no”. This is the meat of the talk, and I’ll spend more time here than on the technology.
The software industry doesn’t have unions to stand behind us when we say “no”. We don’t have ethics committees to help us determine whether what we’re building is ethical and legal. We might lose our jobs when we try to explain the moral, ethical, or legal issues with that feature that tracks users. Are we in a position to find a new job before we run out of rent and food money? Will we be “crying wolf” and ignored if we object again later over some other issue? How can we best think through the ramifications of what we code?
I was lucky: for better or for worse, I realized my moral issue with the DoD project near the end of an internship. When I told my employer that I was not interested in continuing with them, I already had a new job lined up, and there were no hard feelings. That isn’t always the case. I want to share the cool stuff I was doing and how it blinded me to the consequences, and I want you to think about those consequences when you hear about a new feature, so that you can make these calls for yourself.