- To repair, improve, return to service, heal, remunerate, compensate, or prevent from recurring. “She fixed the problem”, “the doctor fixed my bad knee”, or “we’ll fix it”.
- To immobilize, assign, stick, ascribe, prevent from moving, or render unchangeable. “Fix your eyes on this”, “it’s a fix”, or “the anchor fixes the boat in place”.
When a problem comes along, four fundamental questions travel with it: why it happened, how we can fix it, how we can prevent it, and who’s to blame for it.
They’re all relevant. They’re all important. But if your aim is to fix the problem, do your best to let go of whose fault it was. Spend your energy instead on understanding what caused it and how to repair the damage.
Sure, problems are often caused by a person’s mistakes or poor decisions. That definitely belongs in your root-cause analysis, and you should make sure that the same thing doesn’t happen again. But once you’ve figured out why the problem happened and how to fix it and prevent it in future, it’s just not that useful to focus on whose fault it was. Fixing the blame—assigning it to a person—takes time away from other tasks and builds resentment, and you don’t get much value in return.
When people fix blame, what are they achieving? Okay, you’ve figured out whose fault the problem was… so what? How does that help you?
It certainly doesn’t make the current problem go away: if you break a bone, the doctor can’t fix it by saying “you should have been more careful!”. If your kid gets a bad grade, you can’t improve it by saying “I told you over and over again to study harder!”. Often there are action items in there—cautious behavior, better study skills—that are useful in making a prevention plan for the future, but the blaming part doesn’t add much value.
Yet we do this all the time. How often have you heard people yelling, when there’s a problem, about who should have done something differently? Shouting that someone else did something dumb? Arguing about responsibility and blame as if those things will somehow change the situation?
We rely on blame, thinking that it will motivate others not to make the same mistakes again, thinking that if we can shame people hard enough, they’ll stop screwing up. Sometimes, it’s even true: blame does motivate people to change their behavior. But it’s a poor tool even in the best craftsmen’s hands, and there are better ones available.
Accountability is important. I’m not asking you to pretend that people don’t contribute to problems, nor am I suggesting that we shouldn’t hold them accountable. I’m saying that blame—the hot, emotional, shame-filled Red Hat concept—isn’t very useful. Determine the causes, address them directly, and keep moving.
Fix the problem, not the blame.
Many thanks to Karen Butler Easter, who taught me both this concept and this framing in my early days as a manager.
The spread of false information during the election cycle was so bad that President Barack Obama called Facebook a "dust cloud of nonsense."
And Business Insider's Alyson Shontell called Facebook CEO Mark Zuckerberg's reaction to this criticism "tone-deaf." His public stance is that fake news is such a small percentage of the stuff shared on Facebook that it couldn't have had an impact. He has held to that stance even while Facebook has officially vowed to do better and insisted that ferreting out the real news from the lies is a difficult technical problem.
Just how hard is it for an algorithm to distinguish real news from lies?
Not that hard.
During a hackathon at Princeton University, four college students created one in the form of a Chrome browser extension in just 36 hours. They named their project "FiB: Stop living a lie."
The students are Nabanita De, a second-year master's student in computer science at the University of Massachusetts Amherst; Anant Goel, a freshman at Purdue University; Mark Craft, a sophomore at the University of Illinois at Urbana-Champaign; and Qinglin Chen, a sophomore also at the University of Illinois at Urbana-Champaign.
Their News Feed authenticity checker works like this, De tells us:
"It classifies every post, be it pictures (Twitter snapshots), adult content pictures, fake links, malware links, fake news links as verified or non-verified using artificial intelligence.
"For links, we take into account the website's reputation, also query it against malware and phishing websites database and also take the content, search it on Google/Bing, retrieve searches with high confidence and summarize that link and show to the user. For pictures like Twitter snapshots, we convert the image to text, use the usernames mentioned in the tweet, to get all tweets of the user and check if current tweet was ever posted by the user."
The browser plug-in then adds a little tag in the corner that says whether the story is verified.
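The link-checking step De describes (querying a malware/phishing database, then judging the site's reputation) might be sketched roughly like the following. To be clear, the domains, scores, and threshold below are invented placeholders for illustration, not FiB's actual data or logic:

```python
# Hypothetical stand-in for the malware/phishing databases the students query.
KNOWN_BAD_DOMAINS = {"malware.example", "phishy-clickbait.example"}

# Hypothetical reputation scores (0.0 = untrusted, 1.0 = trusted).
DOMAIN_REPUTATION = {
    "established-paper.example": 0.95,
    "random-blog.example": 0.30,
}

def classify_link(domain: str, threshold: float = 0.5) -> str:
    """Label a link "not verified" if its domain is blocklisted or its
    reputation score falls below the threshold; "verified" otherwise."""
    if domain in KNOWN_BAD_DOMAINS:
        return "not verified"
    # Domains we have never seen default to a score of 0.0.
    score = DOMAIN_REPUTATION.get(domain, 0.0)
    return "verified" if score >= threshold else "not verified"

print(classify_link("established-paper.example"))  # verified
print(classify_link("malware.example"))            # not verified
print(classify_link("unknown.example"))            # not verified
```

The real extension layers more signals on top of this (content search against Google/Bing, OCR on tweet screenshots), but the shape is the same: gather evidence per post, then emit a binary verified/not-verified label.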
For instance, it discovered that this news story promising that pot cures cancer was fake, so it noted that the story was "not verified."
But this news story about the Simpsons being bummed that the show predicted the election results? That was real and was tagged "verified."
The students have released their extension as an open-source project, so any developer with the know-how can install it and tweak it.
A Chrome plug-in that labels fake news obviously isn't the total solution for Facebook to police itself. Ideally, Facebook will remove fake stuff completely, not just add a tiny, easy-to-miss tag that requires a browser extension.
But the students show that algorithms can be built to determine with reasonable certainty which news is true and which isn't, and that something can be done to put that information in front of readers as they consider clicking.
Facebook, by the way, was one of the companies sponsoring this hackathon event.
Word is that many Facebook employees are so upset about this situation that a group of renegades inside the company has taken it upon themselves to figure out how to fix the issue, BuzzFeed reports. Maybe FiB will give them a head start.