
Vast differences in major flaw handling separate software and manufacturing firms


How different the handling of analysis and subsequent disclosure of security flaws is between software and computer makers on the one hand and hardware and industrial vendors on the other. Whereas the software industry too often seeks to muzzle "amateur and professional researchers who have found flaws in their products," to the point of imprisonment via the Digital Millennium Copyright Act (DMCA), hardware vendors tend to work with the investigators who discover faults, if not outright halting production before their customers halt deliveries.

I am equally disturbed that "all the special-interest organizations created by vendors for vendors," such as Microsoft's Organization for Internet Safety, are designed to shield said vendors from public censure. I have even less patience with governmental entities, especially those in the intel community, which should know that criminal and terrorist groups are working just as quickly to build a repository of hacks and will employ them when it is financially rewarding or when a DDoS or other strike is ordered.

As a heavy software user, what I am about to profess could indeed put me at risk, but it must be said:

  1. Software vendors really have no incentive to repair, possibly even rearchitect, their products so as to quickly resolve these flaws without public disclosure and censure in the marketplace. New products are the tip of a legacy iceberg, but one can start on high-traffic avenues such as IE.
  2. Software vendors have exported the abrogation of their design responsibility, and its attendant financial and disruptive impacts, to users in the public and private sectors. It is long past due for government to begin to hold them accountable in spite of the end-user agreements that absolve vendors.

I am well aware of the difficulty of reengineering security in after the fact, be it in software, industrial, or corporate processes:

The public disclosure of software vulnerabilities originally gained momentum in the early 1990s, because operating system and application makers did not always respond to people who found security holes in their products. By telling the public about the security problems, the researchers ensured that software makers couldn't ignore the issue.

Life is not fair in having the market shift under vendors' feet, but it has, yet vendors are stalling and government seems too permissive in allowing it. Consider what a manufacturing firm does when a serious fault is found in its product: interim damage control, generally called containment, i.e., the action which immediately stops the symptom. The usual process is:

  1. Quarantine potential customer dissatisfiers at all points in process.
  2. Implement special 100% inspection or test.
  3. Identify contained shipments.
  4. Confirm effectiveness at customer.

Catastrophic for software vendors, as a flaw will be present in every copy, so they attempt to enforce "responsible disclosure," which is little more than a controlled gag order. Their argument is self-serving: researchers should "delay the announcement of security holes so that manufacturers have time to patch them. That way, people who use flawed products are protected from attack," as if that one researcher is the only one who has found, or will find, the flaw:

We don't feel that we are finding things that are unknown to everyone else. I am not special because I can run a debugger. Others can find--and use--these flaws.

The result has only been longer intervals between discovery and patch, a false sense of security among end-users, and an undeserved security reputation for the vendors.

Nothing new, as a mid-19th century book on locks noted:

Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock [is] not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance. It cannot be too earnestly urged that an acquaintance with real facts will, in the end, be better for all parties.

Give the cracking of the anti-theft Digital Signature Transponder (DST) car key a read. It offers good design guidance as well as an extension past ignition keys to highway toll payment transponders, physical security access systems, and inventory systems. Without robust encryption, other RFID systems are vulnerable.
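By way of scale, here is a minimal back-of-the-envelope sketch of why the DST's 40-bit key length is so troubling. Only the 40-bit keyspace is taken from the Bono et al. analysis; the key-test rates below are illustrative assumptions, not figures from the paper.

    # Back-of-the-envelope sketch: exhaustive search of a 40-bit keyspace.
    # The 40-bit key length comes from the Bono et al. DST analysis; the
    # key-test rates are assumed for illustration only.

    KEY_BITS = 40
    KEYSPACE = 2 ** KEY_BITS  # ~1.1 x 10^12 candidate keys

    # Hypothetical search rates, in keys tested per second.
    ASSUMED_RATES = {
        "single software cracker (assumed 1M keys/s)": 1_000_000,
        "small FPGA array (assumed 250M keys/s)": 250_000_000,
    }

    for label, rate in ASSUMED_RATES.items():
        hours = KEYSPACE / rate / 3600
        print(f"{label}: ~{hours:,.1f} hours to sweep the full keyspace")

Even at these modest assumed rates the full keyspace falls in days or hours, which is why weak, short-keyed ciphers leave toll, access-control, and inventory transponders exposed as well.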

Flaw finders go their own way
By Robert Lemos
CNET News.com
January 26, 2005

Graduate Cryptographers Unlock Code of 'Thiefproof' Car Key
By John Schwartz
New York Times
January 29, 2005

Security Analysis of a Cryptographically-Enabled RFID Device
Steve Bono, Matthew Green, Adam Stubblefield, Ari Juels, Avi Rubin, Michael Szydlo
Johns Hopkins University Information Security Institute, RSA Laboratories
[Draft, 28 Jan 2005]

Gordon Housworth


