I received an email commenting on my written and podcasted criticism of "ethical hacking". For the record, I've said on many occasions that ethical hacking is the perceived high road of cracking: an organized and sanctioned practice of identifying vulnerabilities in software. In practice, "open community" ethical hacking is a train wreck, widely practiced outside these parameters, by people with ambiguous motives, using few if any formal methodologies and acceptance criteria.
The email comment begins as follows.
QUOTE: I would like to stress that discovering and writing exploits for certain types of flaws does require serious knowledge and skills, which 99.9% of programmers do not possess. While humility is a good thing, the fact of the matter is that these people are part of a select group. Also, as a sidenote: a large percentage of programmers do not understand even the basics of the possible security risks which may affect their products. ENDQUOTE
MY RESPONSE: This is mostly true. There are any number of automated methods to ferret out exploits; heck, even proper regression testing can accomplish this. I'll note that this is a sad testimony to the current state of software quality assurance.
I don't know how to respond to your comment about exploit writers being members of a select group. Do you believe this entitles them to be treated differently, or to behave differently? Some people might suggest that I'm a member of a select group. First, I'd laugh, then I'd comment that select or not, my first obligation to my community is to be responsible, accountable and do no harm.
The email continues: QUOTE: from an economics point of view: software vendors have no financial reason to fix bugs (by bugs I mean here problems which wouldn't come up in real life - i.e., they wouldn't bother the customers - but under exceptional circumstances - like a specially crafted query or input file - they could lead to information disclosure / arbitrary code execution / etc). Fixed bugs don't sell products. New features sell products. In this sense "ethical hacking" and the threat of full disclosure play a role in keeping the players (at least somewhat) honest. ENDQUOTE
MY RESPONSE: We'll have to agree to disagree. I believe that reliable software earns a reputation for quality. Users are creatures of habit, especially unsophisticated users. A large percentage of users rely on a small percentage of features in nearly all the software they use. When those features fail consistently, users become frustrated and migrate to a different product. We have ample evidence from the automobile industry that, over time, people abandon brands that do not provide quality products. Look at the top 25 Fortune 100 companies in 1960 and count the number of US auto manufacturers. Check the NYSE today and count again.
The email now focuses on the matter of ethics: QUOTE: Where the ethics part comes in (in my opinion) is thinking about the customer. As I see it there are two extremes:
- the "bad guys" discover the vulnerability and use it to take advantage of the users of a certain product without anybody knowing it
- the vendor discovers it and patches the problem (hopefully before anybody else discovers it)
Of course (as with everything) there are many shades of gray in between (like customers not deploying the patch right away and the "bad guys" reverse engineering it to find the flaw it fixes and then exploit it on the customer base who didn't apply the patch), but I didn't want to complicate this description. ENDQUOTE
MY RESPONSE: First, I have to say that ethics should apply to everyone, equally. Painting all vendors as entirely evil is simply the wrong launch point. Understand that I'm not siding with vendors: they should act ethically as well.
Now, before you dismiss customer behavior as merely complicating the matter, I have to say that you can't, because it illustrates how broadly the responsibility and accountability for maintaining secure systems and networks must extend. It also helps distinguish bad actors from ethical ones. Why limit full disclosure to vendors? Shouldn't full disclosure extend to making public notice of all the web sites that are still vulnerable to known exploits? Why not embarrass or threaten every site owner into applying patch management? Go a step further. Embarrass and threaten every operator who doesn't configure systems and networks according to best security practices.
The only place where I believe there are shades of grey here is "where you draw the line" - and that's as religious an argument as pro-life versus pro-choice.
Now the email focuses on the "ethical hacker" QUOTE: The "ethical hacker" approach falls somewhere in the middle: after discovery let the vendor know and if it doesn't care (doesn't communicate with you, doesn't promise to release a fix within reasonable time-frame), release the vulnerability publicly, preferably with methods for potential customers to mitigate it. Why should it be released? Because as time passes, the probability that the "bad guys" find it increases! As an independent security researcher you don't have any other choice than to follow this path.
MY RESPONSE: You always have choices. Here's one, and I'm fine with the following scenario:
- a security researcher discovers a vulnerability in software, contacts the vendor, and reports the vulnerability (possibly providing exploit code)
- if the vendor is non-responsive, the researcher corroborates the vulnerability privately with a trusted community/3rd party (e.g., a security association); together they identify a workaround
- the trusted community/3rd party contacts the vendor
- if the vendor is still non-responsive, the security association publishes the vulnerability and workaround (as in the CVE database)
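For concreteness, the escalation path I describe could be sketched as a small decision function. The step names and branch conditions below are illustrative, not a formal process:

```python
def disclosure_path(vendor_responds, vendor_responds_to_association):
    """Return the sequence of steps a researcher would follow
    under the escalation scenario sketched above (illustrative only)."""
    steps = ["report vulnerability privately to vendor"]
    if vendor_responds:
        # Best case: the vendor engages and the fix ships quietly.
        steps.append("vendor releases fix; researcher credited")
        return steps
    # Vendor silent: escalate through a trusted, disinterested third party.
    steps += [
        "corroborate finding with trusted third party (e.g., a security association)",
        "identify a workaround together",
        "third party contacts vendor",
    ]
    if vendor_responds_to_association:
        steps.append("vendor releases fix; researcher credited")
    else:
        # Last resort: publish the vulnerability with its workaround.
        steps.append("association publishes vulnerability and workaround")
    return steps
```

Note that public disclosure is reached only after two private contact attempts fail, which is the "do no harm" property the scenario aims for.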
My perceived "value add" here is that, hopefully, the vendor will respond to an association that is reputable, has no self-interest, and has earned credibility among the vendor, research, and user communities. A best effort has been made to "do no harm" to both user and vendor. The security association can credit the researcher with the discovery.
The email then discusses the matter of ethical hackers who are not ethical:
QUOTE: There are many bad apples in the "research community" who place personal pride before the interest of the customers, but they are not practicing ethical hacking!
MY RESPONSE: These folks are not practicing ethical hacking; the "ethical" is misapplied. I can call a protocol the Simple Network Management Protocol, but that's a joke - the protocol's not simple at all. Labeling it "simple" is a case of subliminal advertising. Calling oneself an ethical hacker is like calling oneself an ethical Christian, Jew or Muslim. By definition, all of these *must* be ethical or they are not true to their faith or profession.
In my rants about ethical hacking, I portray ethical hackers who are bad actors as opinionated and abusive. The email takes exception to this, as follows:
QUOTE: A big vendor cannot disregard a serious vulnerability just because of the style of the communication. Do you consider that just because I write "I'm the king of the world and you know s**** about software development" in an e-mail to MS in which I disclose a remotely exploitable flaw for Vista, they should disregard it? If the vulnerability is genuine and the vendor really doesn't communicate (doesn't even acknowledge receiving the mail) there is no other possibility than going public (again: preferably with a mitigation method for clients) - the alternative being to wait until the "bad guys" discover the vulnerability and the exploitation becomes widespread enough that the company is forced to do something about it.
MY RESPONSE: I believe that anyone who's ethical would not posture in the manner you suggest. Correspondence that slams a company, its product and its employees is an almost certain way to create uncertainty about one's trustworthiness - civilized people do not interact in this manner. If you had written to me saying, "you don't know s--t about ethical hacking" I would never have responded.
I do not deny that there are bad actors in software companies. I don't believe you are going to influence them over the long term by full disclosure in the manner you suggest. You can't force someone to behave ethically. You can only choose to behave so yourself and accept that sometimes you won't accomplish what you set out to do.
The emailer continues by discussing vendor behavior:
QUOTE: You can't rely on companies to try to make the most secure products. They will make the products which generate the most revenue. Cars didn't have safety belts until they were forced to. The same way, software vendors won't place security first (or at least in the first 3 positions) until they are forced to.
MY RESPONSE: IMO, a blanket condemnation is an unproductive and unhealthy starting point for any security researcher. There are bad actors and good in all communities. I'm not in a position to be judge and jury of any company because I find a vulnerability and am ignored.
You raise a different point when you talk about cars and seat belts. Mandatory auto safety measures took years to implement, and Internet security measures will also take years. Activism and litigation influenced the time frame, but ultimately, collective efforts were required to make them mandatory.
I think the emailer made thoughtful arguments, but not convincing ones. I look forward to continuing the exchange.