An ethical hacker is a computer and networking expert who systematically attempts to penetrate a computer system or network on behalf of its owners for the purpose of finding security vulnerabilities that a malicious hacker could potentially exploit.
Ethical hackers use the same methods and techniques to test and bypass a system's defenses as their less-principled counterparts, but rather than taking advantage of any vulnerabilities found, they document them and provide actionable advice on how to fix them so the organization can improve its overall security.

The purpose of ethical hacking is to evaluate the security of a network or system's infrastructure. It entails finding and attempting to exploit any vulnerabilities to determine whether unauthorized access or other malicious activities are possible. Vulnerabilities tend to be found in poor or improper system configuration, known and unknown hardware or software flaws, and operational weaknesses in process or technical countermeasures. One of the first examples of ethical hacking occurred in the 1970s, when the United States government used groups of experts called "red teams" to hack its own computer systems. Ethical hacking has since become a sizable sub-industry within the information security market and has expanded to also cover the physical and human elements of an organization's defenses. A successful test doesn't necessarily mean a network or system is 100% secure, but it should be able to withstand automated attacks and unskilled hackers.
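In practice, a test typically begins with reconnaissance: mapping which services a target exposes before probing them for weaknesses. Below is a minimal sketch of that first step, assuming Python 3; the target address and port list are hypothetical placeholders, and a real engagement would use purpose-built tools such as Nmap. Only scan systems you own or have written permission to test.

    # A minimal reconnaissance sketch (assumes Python 3): a TCP connect scan
    # of a few well-known ports on a host you are authorized to test.
    # TARGET and PORTS are hypothetical placeholders, not real test values.
    import socket

    TARGET = "127.0.0.1"  # placeholder: only scan systems you have permission to probe
    PORTS = [21, 22, 25, 80, 443, 1434, 3389]  # a few commonly audited services

    def scan(host, ports):
        """Return the subset of `ports` that accept a TCP connection on `host`."""
        open_ports = []
        for port in ports:
            # connect_ex returns an error code instead of raising, keeping the loop simple
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(1.0)
                if sock.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        for port in scan(TARGET, PORTS):
            print(f"{TARGET}:{port} is open")

An open port is not itself a vulnerability; it simply tells the tester where to look next, for example at the version and configuration of the service listening there.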
Any organization that has a network connected to the Internet or provides an online service should consider subjecting it to a penetration test. Various standards, such as the Payment Card Industry Data Security Standard, require companies to conduct penetration testing from both an internal and external perspective annually and after any significant change to their infrastructure or applications. Many large companies, such as IBM, maintain in-house teams of ethical hackers, and plenty of firms offer ethical hacking as a service. Trustwave Holdings, Inc., for example, runs an Ethical Hacking Lab that attempts to exploit vulnerabilities that may be present in ATMs, point-of-sale devices and surveillance systems. Various organizations also provide standards and certifications for consultants who conduct penetration testing.
Ethical hacking is a proactive form of information security and is also known as penetration testing, intrusion testing and red teaming. An ethical hacker is sometimes called a legal or white hat hacker, and their counterpart a black hat, terms that come from old Western movies, where the "good guy" wore a white hat and the "bad guy" wore a black hat. The term "ethical hacker" is frowned upon by some security professionals, who see it as a contradiction in terms and prefer the name "penetration tester."

Before commissioning an organization or individual, it is considered best practice to read their service-level and code of conduct agreements covering how testing will be carried out and how the results will be handled, as the results are likely to contain sensitive information about the system being tested. There have been instances of "ethical hackers" reporting vulnerabilities they found while testing systems without the owner's express permission. Even the LulzSec black hat hacker group has claimed its motivations include drawing attention to computer security flaws and holes. This type of hacking is a criminal offence in most countries, even if the purported intention was to improve system security. For hacking to be deemed ethical, the hacker must have express permission from the owner to probe the network and attempt to identify potential security risks.
Does ethical hacking do more harm than good?
Within minutes of its release, SQL Slammer caused Internet connectivity to drop 15 percent globally and by as much as 60 percent in some locations. It was particularly nasty for large corporations with multiple intranet connections to partners and suppliers. Latency tripled, and many applications timed out. More than 13,000 Bank of America ATMs went down for several hours.
And yet, like hundreds of worms and viruses before it, Slammer need not have happened at all.

While media reports largely focused on the technical reasons why organizations were vulnerable to Slammer, few discussed the root problem: the person who released the worm. This person violated a fundamental ethical rule--Kant's Categorical Imperative, which cautions us not to act in a way we wouldn't want everyone else to act. If we all behaved in such a manner, the Internet would be unusable. Indeed, it wouldn't exist.
But part of the responsibility also rests with the person who made a conscious decision to move the vulnerability from "known to few, and not a problem" to an attack that crippled the Internet. The worm was based on code written and published by David Litchfield of NGSSoftware. Litchfield eagerly shared his knowledge and his work. The result of his labors made it easy for someone without his knowledge and skill to exploit the vulnerability in disastrous ways. The public release of vulnerability information--regardless of whether it has a corresponding fix--is often performed by self-styled "security researchers" for small, obscure firms. Large, prestigious firms never do it. These small firms have concluded that they will gain more business from the recognition than they will lose from the notoriety. To the extent that we contribute to that belief, we share part of the responsibility.
Those who publish vulnerabilities claim they do so in the name of security. They insist that vendors, Microsoft in particular, wouldn't otherwise be motivated to produce quality code or fix vulnerabilities. They claim that they are bound by professional ethics to do so because professionals share their knowledge.
This problem isn't new. Most professions have to cope with how to share information with the good guys while not leaking it to the bad guys. Most have come down in the same place: the professional shares his knowledge, skills and abilities with his principals and his peers. Not only is he not obligated to share with others, but in most cases he is ethically prohibited from doing so.

Information security is no different in this sense from other professions, yet the "open disclosure" debate rages on.
After Slammer hit, Litchfield reportedly regretted publicizing the vulnerability. "We often forget that our actions online can have very real consequences in real life--the next big worm could take out enough critical machines that people are killed," he wrote. "I don't want to feel that I've contributed to that." Later reports suggested that he changed his mind, prompted in part "by the hundreds of e-mails...encouraging [him] to keep publishing exploits."
Most of us learn what we need to know about ethical behavior in the sandbox and kindergarten. From a very young age, we intuitively know the difference between right and wrong, and we behave well out of habit.
But neither intuition nor habit serves us well when it comes to knowing what's ethical in an environment like the Internet. Here we need analysis, analogy and history. Let us hope it takes something less than someone's death for Litchfield and others to understand and apply these lessons in support of the common good.
This art has deliberately violated Creative Commons licensing. Is this ethical?
