
ICG Risk Blog - [ Cybersecurity Public ]

Substitute terrorist, even criminal, for file-sharing pirate: BitTorrent & eDonkey lead the way for a COTS many-to-many C2 system


Attentive readers will remember that in the aftermath of 11 September, Islamic terrorists boasted that they had established websites "to make the Internet our tool," and that although service providers frequently shut them down, "they usually reappear someplace else." There was a steganographic effort to digitally hide cleartext and ciphertext inside images, often posted in obscure IRC chatrooms. Authorities were challenged to find, isolate, and decrypt them, requiring vast bandwidth and supercomputer arrays. This bears an uncanny similarity to the present efforts of the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA) to halt individual file-swapping pirates.
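The steganographic technique mentioned above can be illustrated with a minimal sketch: hiding a payload in the least-significant bits of an image's raw bytes. This is a toy Python example, not any tool actually in use; real steganography operates on decoded pixel data and typically encrypts the payload first.

```python
def embed(cover: bytearray, payload: bytes) -> bytearray:
    """Hide payload in the least-significant bit of successive cover bytes."""
    bits = [(b >> i) & 1 for b in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # each byte changes by at most 1
    return stego

def extract(stego: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden by embed()."""
    out = bytearray()
    for j in range(n_bytes):
        value = 0
        for i in range(8):
            value = (value << 1) | (stego[j * 8 + i] & 1)
        out.append(value)
    return bytes(out)
```

Because each cover byte changes by at most one, the carrier image looks unaltered; detection then requires statistical analysis of the whole traffic stream, which is exactly the find-and-isolate burden the post describes.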

I predict that mischief makers will adopt these emerging file sharing tools to create means of communicating to and among the faithful in ways that will challenge conventional traffic analysis and the newly emerging link analysis, while slowing the identification and termination of illegal sites.

eDonkey differs in two essential ways from earlier file-swapping services: decentralized search and independent distribution of file fragments:

  • Decentralized search: When a file is shared on the network, the technology gives the file a "hash" identifier--essentially an address based on the characteristics of the file itself. Each computer logged onto the network has a certain range of addresses assigned to it, so it can act as an index. This allows searches to be carried out more efficiently than in earlier decentralized systems. [A query] would be directed quickly to the computer that is temporarily responsible for keeping track of the location of files in that category, and a response would be returned more quickly.
  • Independent file fragment distribution: the system can break up each file into tiny pieces, allowing them to be distributed. As soon as one person starts downloading these pieces, he or she starts offering them to the network at large. That means a movie does not have to be downloaded in its entirety before it can be offered to other people, making distribution of these and other larger files much more efficient.
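The two mechanisms above can be sketched in a few lines. The XOR "closeness" metric is an assumption borrowed from Kademlia-style networks (the design eDonkey's decentralized successor, Overnet, used); the node IDs and piece size are purely illustrative:

```python
import hashlib

def file_id(data: bytes) -> int:
    """The file's network address derives from its own content (its hash)."""
    return int.from_bytes(hashlib.sha1(data).digest(), "big")

def responsible_node(fid: int, node_ids: list[int]) -> int:
    """The node 'closest' to the hash keeps the index entry for that file,
    so a query routes straight to it instead of flooding the network."""
    return min(node_ids, key=lambda n: n ^ fid)

def fragments(data: bytes, size: int) -> list[bytes]:
    """Fixed-size pieces that peers can download and re-serve independently."""
    return [data[i:i + size] for i in range(0, len(data), size)]
```

Note that no central index exists anywhere: the mapping from file to indexing node is computed independently by every peer, which is what makes search both efficient and hard to shut down.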

BitTorrent is optimized for distribution and transfer speed of large files rather than for search:

[Users intending to distribute files] set up a "tracker" Web site, [essentially] a low-level server that keeps track of requests for a given file and directs the requests to the users offering the file. These users will have posted links to the tracker on a Web site, and these links will trigger the properly formatted BitTorrent downloads. Once someone has started downloading a file, that person's computer immediately serves as an upload server for anyone else looking for the file. The technology automatically balances upload and download speeds, ensuring that people downloading give back to the network, Cohen said. Unlike other file-swapping networks, if the number of people searching for a single file increases, it means faster downloads--not traffic jams--as the individual pieces get spread quickly around the community.

Unlike earlier file-sharing programs, the more users swapping data on BitTorrent, the quicker it flows, which was its original White Hat app (reducing server clogging when distributing large files). It was immediately diverted to Black Hat piracy ends. BitTorrent gains resistance to spoofing countermeasures used to sabotage file-sharing (such as uploading decoy or incomplete files) as it doesn't seek entire files, but "torrent" or seed files hosted by many sites:

The files on the Web sites are not songs or movies but serve as markers that point the way to other users sharing a given file. BitTorrent then assembles complete files from multiple chunks of data obtained from everyone who is sharing the file. Attempts to upload bogus files to corrupt the process fail because the BitTorrent program follows a blueprint of the original file when piecing it together.
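The "blueprint" the quote refers to is a list of per-piece hashes published once by the original seeder; a sketch of why decoy uploads fail (the piece size and function names are illustrative):

```python
import hashlib

def blueprint(data: bytes, piece: int) -> list[bytes]:
    """Per-piece SHA-1 hashes, computed once from the original file."""
    return [hashlib.sha1(data[i:i + piece]).digest()
            for i in range(0, len(data), piece)]

def accept_piece(index: int, chunk: bytes, hashes: list[bytes]) -> bool:
    """A chunk from any peer is kept only if it matches the blueprint,
    so a bogus or corrupted upload is simply discarded and re-fetched."""
    return hashlib.sha1(chunk).digest() == hashes[index]
```

This is why flooding the network with decoy files, a countermeasure that worked against earlier networks, fails here: a poisoned piece never matches its published hash, and the downloader just asks another peer.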

While some BitTorrent seed hosting sites have been forced to close, others stay under the radar as they are not hosting known copyrighted materials and so do not have as identifiable a signature. BitTorrent central servers have already come under DDoS attack from unknown sources. That is prompting an "overhaul of the BitTorrent protocol itself, as right now there lies too much reliance on the trackers, [thereby reducing] the requirement of the tracker to an initial connection, therefore moving the actual peer-sharing burden to the peers themselves."

It's only a matter of time before terrorists and criminals begin to harvest these new P2P tools.

'BitTorrent' Gives Hollywood a Headache
December 11, 2004

Free underground--an immovable force?
By John Borland
CNET May 30, 2003

Downloads rise as file traders seek new venues
By Dawn Kawamoto
CNET April 26, 2004

Gordon Housworth

Cybersecurity Public  InfoT Public  Strategic Risk Public  Terrorism Public  



Google Desktop Search (GDS) merely makes easy what was once a forensic exercise


I submit that the concern over 'excesses' of Google Desktop Search (GDS) is really a flag for what had long been available to both legitimate and illegal forensic sweeps of your system(s). What sets GDS apart is that it makes retrieving those items an effortless exercise for the ordinary fellow, or allows a thief to find items in a time window that reduces his exposure while maximizing his gain.

Google Desktop Search (GDS) indexes and finds documents that users may not wish to be found, such as the browser cache with visited Web pages, online banking and purchase transactions, personal messages sent from Web e-mail programs, and password-protected personal Web pages. On a PC with multiple users, GDS will search documents and web pages for all users. GDS searches the Windows cache, which can bypass some encryption programs entirely by finding cleartext temp files. That, in combination with GDS's ability to catalog and retrieve encrypted files, adds further security problems, e.g., exposing cleartext traffic directly as well as providing the cleartext-ciphertext pair that gives a mischief-maker (assuming the owner is not using one-time pads) the means to decrypt subsequent traffic without requiring access to the target machine:

GDS can also retrieve encrypted files. No, it doesn't break the encryption or save a copy of the key. However, it searches the Windows cache, which can bypass some encryption programs entirely. And if you install the program on a computer with multiple users, you can search documents and Web pages for all users.

GDS isn't doing anything wrong; it's indexing and searching documents just as it's supposed to. The vulnerabilities are due to the design of Internet Explorer, Opera, Firefox, PGP and other programs.

First, Web browsers should not store SSL-encrypted pages or pages with personal e-mail. If they do store them, they should at least ask the user first.

Second, an encryption program that leaves copies of decrypted files in the cache is poorly designed. Those files are there whether or not GDS searches for them.

There is a school of thought that users should have been aware that these vulnerabilities were there in the first instance. While I am generally supportive of this reasoning, the burgeoning number of apps, many of them launched with 'misbehaving' characteristics, continues to raise the bar for individual users. As application design will not improve anytime soon, if you have anything to protect on systems over which you cannot control physical access, you should avail yourself of whatever assistance you can in order to know what is openly accessible, i.e., know what is at risk either directly or indirectly by inference from available search results.

I am of the opinion that one should turn the fiercest attack tools, GDS included, on one's own systems and networks, early and often, so that you find faults before the bad guys find them. Because they are looking:

Some people blame Google for these problems and suggest, wrongly, that Google fix them. What if Google were to bow to public pressure and modify GDS to avoid showing confidential information? The underlying problems would remain: The private Web pages would still be in the browser's cache; the encryption program would still be leaving copies of the plain-text files in the operating system's cache; and the administrator could still eavesdrop on anyone's computer to which he or she has access. The only thing that would have changed is that these vulnerabilities once again would be hidden from the average computer user.

Bad guys aside, I require an unambiguous firewall between GDS and Google web search -- which for me is a completely separate tool no matter how good GDS is -- so as to prevent an inadvertent scan and upload of local materials to the Google virtual store. Even if the current version is secure, a coding slip in a subsequent GDS version could see one's data cataloged and uploaded, an unrecoverable catastrophe. I would also like GDS to launch not in permissive mode, e.g., all functions enabled, but in restricted mode, so that users would have to intentionally enable cataloging features.

There is precedent: Google has already erred with GMail and has had to produce more secure versions in order to prevent malefactors from harvesting GMail user IDs. To be fair to Google and, by extension, Microsoft, both firms have strived for functionality over security. It remains to be seen who can successfully reintegrate security while retaining functionality.

Were I a bad guy, I would be extending my standing sweeps of competitors via Google and Google cache in the event that a target's user inadvertently put the wrong thing into the scrum.

Desktop Google Finds Holes
By Bruce Schneier, ExtremeTech
November 29, 2004

Gordon Housworth

Cybersecurity Public  InfoT Public  



Smart card to single standardized cryptographic token to national identity card


A thoughtful observer of IT systems issues, George Ou, extended the concept of a smart card to a single standardized cryptographic token in Why stop at Single Sign On, why not Universal Sign On?:

Microsoft has the right idea by implementing Smart Cards that not only allow their employees to access their computing resources, but their physical campus as well. But why stop there? Smart Cards are essentially cryptographic tokens that not only enable "something you have" security, but strong authentication using PKC (Public Key Cryptography). A traditional metal key provides "something you have", but it can't provide PKC. PKC is used in most modern Cryptography systems like SSL, S/MIME, or PGP just to name a few. Essentially, it's the strongest form of authentication ever invented and it can also enable strong encryption by providing a secure key exchange.

So why stop at access to the building and computer systems, how about replacing all of the following applications with a single standardized cryptographic token with an integrated finger print reader and/or numeric keypad for good measure.

    • Credit Card and ATM Card replacement
    • Car key replacement
    • House key replacement
    • Building badge replacement
    • Computer and Network login
    • Wireless Access token
    • VPN Access token
    • Un-forgeable passport with Digitally Signed Photo
    • Un-forgeable driver's license with Digitally Signed Photo
    • Un-forgeable Social Security Number with Digitally Signed Photo

Ou closed with a comment to the effect that a user "could just carry a single token to do all that! Maybe on a key chain," which to my mind described a personal, even national, ID card; yet some twenty responses stayed at the technical level of feasibility, thus my contribution:

Without debating the merits of a national identity card, my first read of your post was that it would effectively perform as one given that your 'use cases' describe the substantive core of an individual's interaction with society.

A brief scan of commentary did not, to my mind, flag such a "third rail" application, so I mention it here.

Readers should not take my comment as pro or con, although the current situation of fifty state driver's licenses -- documents originally designed to indicate one's ability to operate a class of motor vehicle that have been increasingly pressed into an ID function -- has been and remains ripe for criminal diversion.

Having lived overseas for many years, I reflexively carry my passport and proffer it here in the US when asked for ID. In a substantial, if not majority, share of cases, I am asked for the less secure, more easily forged driver's license instead.

FYI, I would recommend more of Ou's observations as flags for risk. In It's been an hour and my IP phone is still booting, I saw a major security risk in the delay involved in the TFTP boot-up process. Were I a terrorist or criminal, I would look to cause a network drop, or a series of drops, and so deny my target the ability to use its phone system in whole or in part, and certainly create uncertainty in the minds of users as to which part of the organization would be inop.

I maintain that VoIP is being looked at primarily through the eyes of commercial efficiency and not availability, redundancy, and security. Cheap piggyback architecture and poor implementation will cost firms dearly. They just don't know it yet.

Why stop at Single Sign On, why not Universal Sign On?
George Ou
ZDNet weblog
17 November, 2004

Gordon Housworth

Cybersecurity Public  InfoT Public  Strategic Risk Public  



Domestic Digital Pearl Harbor driven by offshore criminal and terrorist agents


While I had previously noted that "Malware (malicious software), phishing, cracking, and social engineering, individually and in concert, increasingly point to the goal of criminal profit," it is increasingly apparent that while US residents remain the most attractive target (due, I believe, to our volume of ecommerce, the availability of broadband bot targets, and far too many dumb users unable to protect their PCs), the perps are Eastern European gangs. (US organized crime has been comparatively slow to embrace cybercrime.) While the US has the largest absolute number of fraudulent transactions:

countries such as the former Yugoslav republic of Macedonia, the African countries of Nigeria and Ghana, and Vietnam are home to a higher percentage of fraud. [VeriSign] labels any credit card transaction from an IP address sourced in Macedonia as "risky," and more than 85 percent of such transactions from the other three countries are not to be trusted.
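The risk labeling described above could be caricatured as a simple country-of-origin rule. The scores and threshold below are invented for illustration; VeriSign's actual scoring model is proprietary and weighs far more signals than source country:

```python
# Invented scores keyed by ISO country code; purely illustrative.
RISK_BY_COUNTRY = {"MK": 1.00, "NG": 0.85, "GH": 0.85, "VN": 0.85}

def flag_transaction(country_code: str, threshold: float = 0.5) -> bool:
    """Flag a card transaction as risky based on the IP's country of origin."""
    return RISK_BY_COUNTRY.get(country_code, 0.0) >= threshold
```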

It is worth remembering that while Dick Clarke was all too "often dismissed as a Cassandra while cybersecurity czar," and thus the six trends he identified in October 2003 were received with what I would call polite inattention by IT and government (see Revisiting Clarke's six bleak IT trends from October 2003), all that he forecast has come to pass. Clarke said all six would increase, but the one that would go through the roof was 'Rising identity theft.' Not only has it gone through the roof, but it is being used in combination with the other five: Rising vulnerabilities, Rising patches, Falling "time to exploit," Rising rate of propagation, and Rising cost of cleanup.

Phishing (enticing users to surrender financial data and passwords to fake Web sites) is being carried out "on a massive scale [such that the] price of a credit card number is dropping into the pennies now." Offshore perps are infecting US PCs with Trojans and worms, turning them into bots and botnets, which then launch interstate attacks masking the attacker's origin.

One supposes better late than never, but it is still stunning to see the FBI just now publicly begin to say:

Tools and methods used by these increasingly skilled hackers could be employed to cripple our economy and attack our critical infrastructure as part of a terrorist plot. People had to assume that terrorists would seek to hire hackers to "raise money, aid command and control, spread terrorist propaganda and recruit more into their ranks and, lastly and most ominously, attack at little risk."

The Internet could allow attackers to remain anonymous, to strike at multiple targets from a distance and escape detection. Critical infrastructure such as water, power and transportation systems remained vulnerable. In the future, cyberterrorism may become a viable option to traditional physical acts of violence. Terrorists have figured out that we have a technological soft underbelly.

Back in Black hat meets white hat in the Idaho desert, I noted that:

Many "once-isolated systems used to run railroads, pipelines and utilities are now also accessible via the Internet and thus susceptible to sabotage," as "More and more of these things are being connected to the Internet, so they can be monitored at corporate headquarters. It is generally accepted that the August blackout last year could have been caused by that kind of activity."

The Control Systems Center being built at DOE's INEEL by DHS and CERT is intent on addressing five areas: awareness, incident management, standards collaboration, strategic direction and testing. INEEL's head of national security programs is already on record as saying, "I am confident that there is no system connected to the Internet, either by modem or fixed connection, that can't be hacked into."

Given the disarray at DHS, one hopes that they talk to the bureau.

In Clarke's vision of securing the net, I said that at least a small "p" digital Pearl Harbor was possible, in part because the 2003 Federal Computer Security Report Card scored the critical 24 federal agencies at an overall D grade, and that those still getting an F are the departments of Homeland Security, Energy, State, Justice, Health and Human Services, Interior, Agriculture, and Housing and Urban Development. (Defense got itself into the D category along with Transportation, GSA, Treasury, OPM, and NASA.)

Many private industry sectors are no better even as they possess the Supervisory Control and Data Acquisition (SCADA) systems that are the C2 for critical infrastructure including electric, gas and oil distribution systems, water and sewer systems, and various manufacturing processes.

It is painful to think of phishing attacks merely being a money-spinning prelude to an infrastructure attack. We've passed the small 'p' and are now on the way to a medium 'p.'

FBI: Hidden threat inside cybercrime
November 10, 2004, 3:54 PM PT

Report: Crooks behind more Net attacks
By Robert Lemos
CNET November 16, 2004, 2:17 PM PT

Gordon Housworth

Cybersecurity Public  InfoT Public  Risk Containment and Pricing Public  Strategic Risk Public  



Malware, phishing, cracking, and social engineering all point to increasing criminal profit


Malware (malicious software), phishing, cracking, and social engineering, individually and in concert, increasingly point to the goal of criminal profit rather than ego and bragging rights. What the target once experienced as mere inconvenience and indirect loss is now direct loss -- and a lot of it -- along with indirect loss and inconvenience. The better attacks marry two or more of these approaches:

Trojan horses can be used to dupe computer users into running a bot program, which in turn can help launch denial of service attacks for financial gain.

The [Sobig] virus would load software onto users' computers in order to provide a means for bulk e-mailers to use the zombie machines to send out unsolicited messages without detection.

The target is often an avid partner in his or her own demise:

The major issue in Netsky's consistent prevalence is the fact that it rides on the seemingly irremediable human penchant for opening attachments in e-mail messages, even from unknown sources.

People, by nature, are unpredictable and susceptible to manipulation and persuasion. Studies show that humans have certain behavioral tendencies that can be exploited with careful manipulation. Many of the most damaging security penetrations are, and will continue to be, due to social engineering, not electronic hacking or cracking.

Analysts have been watching an unremitting shift "from the traditional goal of claiming fame and notoriety to the pursuit of profit and monetary rewards." Gartner believes that social engineering is a greater problem than hacking:

Criminals are using social engineering to take the identity of someone either for profit, or to gather further information on an enterprise. This is not only a violation of the business, but of someone's personal privacy.

Criminals are zeroing in on the nexus of need, hope and loneliness where the target is most vulnerable, be it targeting the unemployed with "an e-mail that purported to come from Credit Suisse bank advertising a job opportunity" or an updated mail-order bride scam in which a fictitious attractive Russian woman, Ms. Medvedeva, fleeces the lovelorn, literally leaving some waiting at the airport with roses.

One wonders if the scams are so good or the victims so obtuse. One may wonder about the victims upon reading that:

  • 70 percent of consumers will share information, such as their name, address, postal code, phone number, account number, or the answer to a security question, with an unsolicited caller or emailer.
  • 61 percent of consumers do not want to be forced to change passwords, a common procedure mandated to enhance security.
  • 57 percent of consumers do not want their accounts locked down after three failed attempts to provide identification verification information.

One can only imagine the collision of these targets with thoughtful spammers who have no intent to sell anything:

All they want is to "phish" your credit card number. Messages now zip around the Internet purporting to come from trusted companies and asking you to "verify your account." The victim is taken to a Web site that looks genuine but is run by a fraud ring.
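One crude defense against such lookalike sites is to flag domains that sit within an edit or two of a trusted name. The brand list and threshold below are illustrative; real anti-phishing checks also inspect certificates, hosting, and page content:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete
                           cur[-1] + 1,                 # insert
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

# Illustrative brand list; a real check would cover thousands of names.
KNOWN = ["paypal.com", "citibank.com", "ebay.com"]

def looks_spoofed(domain: str) -> bool:
    """A near-miss of a trusted name (1-2 edits off, but not identical)."""
    return any(0 < edit_distance(domain, k) <= 2 for k in KNOWN)
```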

Clearly the weak links are both the users that willingly make one click too many, or surrender information that they should not, and the software vendors that produce faulty code that can be exploited for Trojan, spam and other attacks. 

Human nature will be slow to fix. One can only hope that the software takes less time.

Virus report points to profit-hungry hackers
By Dawn Kawamoto
November 3, 2004

Russian Gal Seeking Comrade? No, It's an Internet Scam
New York Times
November 3, 2004

Old scams pose the 'greatest security risk'
By Munir Kotadia
ZDNet Australia
November 1, 2004

Consumers, not technology, biggest cybersecurity problem
Dan Farber
Oct 27, 2004

The new face of cybercrime
By Phillip Hallam-Baker
Special to ZDNet
July 20, 2004

Gordon Housworth

Cybersecurity Public  InfoT Public  



Black hat meets white hat in the Idaho desert


DHS is creating a Control Systems Center in cooperation with CERT that "involves industry sectors, control system vendors and outside experts. It will focus on five areas: awareness, incident management, standards collaboration, strategic direction and testing." DOE's Idaho National Engineering and Environmental Laboratory (INEEL) occupies a key role in the effort.

Laurin Dodd, responsible for INEEL's national security programs, observes:

I am confident that there is no system connected to the Internet, either by modem or fixed connection, that can't be hacked into.

Many "once-isolated systems used to run railroads, pipelines and utilities are now also accessible via the Internet and thus susceptible to sabotage," as "More and more of these things are being connected to the Internet, so they can be monitored at corporate headquarters. It is generally accepted that the August blackout last year could have been caused by that kind of activity."

Steve Schaeffer, of INEEL's cyber security lab, required "about two months before we had enough information to affect the protocol to affect operations" of a General Electric-designed system. Schaeffer:

My test was to subvert that guy's system in some manner… If they can dial into the system, guess what, so can I.

An outline for the Supervisory Control and Data Acquisition (SCADA) Test Bed [SCADA systems are the C2 for critical infrastructure including electric, gas and oil distribution systems, water and sewer systems, and various manufacturing processes] indicates that an integrated SCADA Test Bed will have "links to cyber, wireless/communications and physical INEEL assets, [will] test legacy and contemporary SCADA systems, [will] provide commercial, confidential and secure testing and evaluation areas, [and will] develop a SCADA Outreach Program [to] establish a dedicated training facility [for] intrusion detection, data analysis and advanced protection."

INEEL is an applied engineering laboratory managed by Bechtel National as the lead partner in Bechtel BWXT Idaho, LLC, the management and operations contractor. INEEL is DOE's lead laboratory for nuclear energy R&D, occupying 890 square miles of the southeast Idaho desert with four mission areas:

  • Energy - core research in nuclear reactor science and technology for next generation reactors
  • Security - threat solutions for population, infrastructure, and environment
  • Science - chemical, engineering, materials, environmental, medical, and biological
  • Environment - safe, legally compliant environmental cleanup

Established in 1949 as the National Reactor Testing Station, INEEL was once the site of the world's largest concentration of nuclear reactors. Fifty-two test reactors, most of them first-of-a-kind, were built and operated, including the US Navy's first prototype nuclear propulsion plant. Of these, three are still operating.

The Snake River Alliance says that of those 52 reactors, "most had meltdowns, either intentionally or unintentionally," and that "from the 50's through the 70's, plutonium-contaminated waste... was buried in shallow unlined pits and trenches [while] high-level liquid waste from reprocessing the Navy's spent nuclear fuel to recover weapons grade uranium was stored in underground tanks... contaminating the soil and groundwater," but then no one's perfect.

This combination of infrastructure growth and protection can come none too soon as over "the next 20 years, electricity demand is expected to increase 40 percent in the United States and 70 percent globally. To ease the impact on global climate, much of this new electricity production is likely to come from nuclear energy, the only existing technology that can generate large amounts of electricity without also emitting greenhouse gases."

INEEL and ANL (Argonne National Laboratory) are leading the US effort to develop the Generation IV nuclear reactors:

The first generation was the early prototype reactors of the 1950s and '60s. The second was the large commercial power plants built in the 1970s and still operating today. Generation III, developed in the 1990s with evolutionary advances in safety and economics, is being built today, primarily in eastern Asia. Until about 2030, new plants will mainly be Generation III designs. The Generation IV nations [US, UK, Japan, Canada, Argentina, Brazil, France, Switzerland, ROK (Republic of Korea), RSA (Republic of South Africa)] plan to develop nuclear energy systems for construction and operation around 2030, when many of the world's existing nuclear power plants will be at or near the end of their operating lives. To succeed in the international marketplace, "Generation IV technologies [using a closed fuel cycle] will need to provide safe, reliable and economical electricity, while reducing the amount and toxicity of nuclear waste and minimizing the risk of nuclear proliferation."

Add in the geopolitical threats to global energy supplies, and on all accounts we can only wish INEEL good luck.

Hackers Join Homeland Security Effort
By Adam Tanner/Washington Post
09/15/04 7:45 AM PT

New DHS Program Aims to Bolster Security of Computer Control Systems
By Tim Starks, CQ Staff
Aug. 18, 2004 - 7:45 p.m.

Gordon Housworth

Cybersecurity Public  InfoT Public  Infrastructure Defense Public  Strategic Risk Public  



Were guns, gates, and guards sufficient at Athens and Boston or were they fortunate?


The "unprecedented security" of the 2004 Athens Olympics was both visible and invisible:

[G]uards and soldiers, and gunboats and frogmen patrolling the harbors... a system of 1,250 infrared and high-resolution surveillance cameras mounted on concrete poles. Additional surveillance data was collected from sensors on 12 patrol boats, 4000 vehicles, 9 helicopters, 4 mobile command centers, and a blimp. It wasn't only images; microphones collected conversations, speech-recognition software converted them to text, and then sophisticated pattern-matching software looked for suspicious patterns. 70,000 people were involved in Olympic security, about seven per athlete or one for every 76 spectators.

Supporting this human and machine sensor array was an IT system that:

underpins the admission of athletes, visitors, and other people and handles the logistics of moving athletes around to the various venues in time for posted events in Athens. [To prevent] hijacking of the IT system or any of its components [each] system has at least two cloned backup systems. Even the data center, situated in a top-secret Athens location, has a twin that's remotely located and tasked with protecting the information and information flow even during an earthquake… Intrusion points like USB ports and removable storage drives were eliminated… [E]verything is equipped for anti-virus, firewall, and intrusion-detection functions.

Security command centers were built "that act as a command and control hub for Greek police, fire departments, armed forces, coast guard, and first aid."

Despite this defensive phalanx, a reporter from the UK Sunday Mirror proved otherwise:

First, he got a job as a driver with a British contractor. He provided no references, underwent no formal interview or background check, and was immediately given access to the main stadium. He found that his van was never thoroughly searched, and that he could have brought in anything. He was able to plant three packages that were designed to look like bombs, all of which went undetected during security sweeps. He was able to get within 60 feet of dozens of heads of state during the opening ceremonies.

And in another bit of "feel-good" security:

Where a six-lane road passes the Ikea-style Olympic complex about 15 miles away, there is a line of empty plastic barriers that should be filled with sand but are not, forming a protective layer of Lego blocks for the common folk inside.

And in an example of inconsistent security precautions:

Up a hill, a block from the guarded gate to the Queen Mary 2, a one-lane road traces a bluff overlooking the grand ship. Standing on the hill, with only a chain fence to obstruct the view, you can almost touch the buses full of dignitaries passing below, pick out passengers in the distance and, apparently, take notes on all the security machinations without being bothered.

The Democratic National Convention in Boston forgot to lock down MedFlight 2, which flew to the hospitals adjacent to the Fleet Center, and the general "36-mile no-fly zone" around the Fleet Center was useless feel-good security, as a 7300-pound MedFlight Dauphin II flying over 200 mph with 2000+ pounds of cargo and 350 gallons of fuel would cover "the entire six-mile by six-mile 'no-fly zone' in less than 90 seconds." "In fact an attacker in a helicopter located 20 miles away could start its engine, take off, hug the road (fly low on the Interstate) and slam into the Fleet Center before the FAA controllers even knew there was a bird in the air." If the shooter is in motion, i.e., has taken control of the helicopter, target survivability is low. And who in those few seconds would want to shoot down what might be a clearly marked medical flight entering a dense hospital area?
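The quoted timing can be checked with simple arithmetic: at 200 mph, six miles takes about 108 seconds, and the "less than 90 seconds" figure corresponds to roughly 240 mph -- either way, far inside any interceptor's reaction time.

```python
def crossing_seconds(distance_miles: float, speed_mph: float) -> float:
    """Time to cross a zone at constant speed: miles/mph gives hours."""
    return distance_miles / speed_mph * 3600.0
```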

Access to the Fleet Center using dirt-filled dump trucks and Jersey barriers was more feel-good security, as these trucks are used to haul salt, weigh under 25,000 lbs fully loaded, and can be bounced by a "20- or 24-foot-long U-Haul truck filled with drums of water driven at a speed of 30-50+ MPH." And what if the drums had nitrate-diesel fuel instead of water?

Authorities and suppliers in both venues will brag of the outcome, but just because they escaped disaster does not prove that those precautions were valid, cost-effective, or worthy of being replicated elsewhere.

Time will tell, but with all the porosity in these shields, I wonder if we were merely lucky.

Security at the Olympics
by Bruce Schneier
Crypto-Gram Newsletter
September 15, 2004

Athens Security: Seeing Isn't Believing
The New York Times
August 13, 2004

Securing The Games
Rob Brownstein
Electronic Design ED Online ID #8484
August 9, 2004

Gordon Housworth

Cybersecurity Public  InfoT Public  Terrorism Public  



Homo boobus and social engineering: When the nut behind the wheel is loose


Homo boobus is one of my favorite creations, the person for whom Murphy's Law was made and whose more spectacular appearances are usually preceded by "Hey, watch this!" The most audacious members of the species go on to posthumously win the Darwin Award.

He or she is also the person that sysadmins have seen "click the email attachments (when they KNEW it was a virus) 'just to see what it would do'." In the future you may have the opportunity to know them much better as the family of socially engineered attacks commences with "drag-and-infect."

Drag-and-infect is a case of drag-and-drop that allows an "attacker [to use the flaw to] install a program on a victim's computer after convincing the person to visit a malicious Web site and click on a graphic." The malicious website would be set up to lure homo boobus to actually drop a program into the victim's startup folder which would then execute when the PC was restarted.
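The persistence mechanism here is nothing more than the startup folder's run-at-login behavior, which also makes it auditable. A minimal defensive sketch, with the caveat that the suffix list and the Windows folder path noted in the comment are my assumptions:

```python
import os

def audit_startup_folder(folder):
    """Return executable-looking entries in a startup folder.

    Anything a user (or a drag-and-drop exploit) places here runs at
    the next login, so unexpected entries deserve scrutiny.
    """
    suspicious_suffixes = (".exe", ".bat", ".cmd", ".vbs", ".js", ".scr")
    entries = []
    for name in os.listdir(folder):
        if name.lower().endswith(suspicious_suffixes):
            entries.append(name)
    return sorted(entries)

# On Windows XP the per-user startup folder typically lives at
#   %USERPROFILE%\Start Menu\Programs\Startup
# (path is an assumption; it varies by Windows version).
```

Run against the actual startup path, anything the user does not recognize is a candidate for removal before the next restart executes it.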

I do not agree with Microsoft's position that the flaw "did not pose a serious risk to users because it requires an attacker to trick people into visiting a Web site and taking some action at the site." Just think how a virus, as opposed to a worm, propagates; a user has to do something, has to intervene, which they do with regularity. It is believed that drag-and-infect can be reduced to a single click, thereby making the exploit much more prevalent.

I very much agree with the comment of the flaw's discoverer, who embedded a general compliment to Microsoft in saying, "The patch [for XP] really does lock down the machine nicely, and whatever anyone finds now will be completely different to the previous year's findings."

Enter the age of Homo boobus. If and when software providers do make their apps more robust, hackers and crackers will shift to the weakest link and they will do it quickly and in novel ways that sail past the constructs meant to stop them.

Consider the novel manner in which spammers have gotten around the use of a graphic containing combinations of ornate letters and numbers, used to defeat spambots and so ensure that the replier is a person: the graphic is captured and relayed to sites where visitors can gain access to erotic materials by entering the correct alphanumeric string for the spammer to use. With the meter running, homo boobus translates one graphic after another to gain more access.
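The relay scheme is essentially a queue between the spambot and the bait site's visitors. A toy model; all class and method names here are hypothetical, purely to illustrate the flow:

```python
from collections import deque

class CaptchaRelay:
    """Toy model of the CAPTCHA-relay scheme: a spambot forwards each
    challenge it meets to a bait site, a human visitor solves it in
    exchange for access, and the answer flows back to the bot."""

    def __init__(self):
        self.pending = deque()   # challenges awaiting a human solver
        self.answers = {}        # challenge id -> solved text

    def bot_submits(self, challenge_id, image_blob):
        # Bot hits a CAPTCHA and forwards the graphic to the bait site.
        self.pending.append((challenge_id, image_blob))

    def human_views(self):
        # The bait site shows the visitor the oldest unsolved challenge.
        return self.pending[0] if self.pending else None

    def human_solves(self, challenge_id, text):
        # Visitor types the string to "unlock" the bait content.
        cid, _ = self.pending.popleft()
        assert cid == challenge_id
        self.answers[challenge_id] = text

    def bot_collects(self, challenge_id):
        # Bot retrieves the answer and replays it at the original site.
        return self.answers.pop(challenge_id, None)
```

The point of the sketch is that no optical-character-recognition breakthrough is needed; the human is simply repositioned inside the attacker's pipeline.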

For the geeks among readers, go here and here for evidence of spambot evolution.

A discussion has commenced regarding the responsibility of a vendor such as Microsoft to insulate any and all users from such threats. It is interesting that some of the early SP2 XP flaws are seen as requiring "so much social engineering that holding Microsoft responsible was an 'unrealistic expectation.'" I do not think that the limit will hold for long, given the creativity of hackers and the propensity of homo boobus to click on anything -- and without that understanding, the responsibility discussion may not go far enough.

Secunia rates this flaw as "highly critical," its second-highest rating of vulnerability threats. I agree and believe that as apps become more robust, hackers will exploit this class sooner than later.

Earlier appearances of Homo boobus:

Drag-and-drop flaw mars Microsoft's latest update
By Robert Lemos
August 20, 2004, 1:04 PM PT

IE flaw under SP2: User’s problem or Microsoft’s?
Posted by david.berlind @ 9:18 am (PDT)
Monday, August 23 2004

The Fastest Man on Earth (Overview and Index)
Why Everything You Know About Murphy’s Law is Wrong
by Nick T. Spark
Los Angeles, California

Online porn often leads high-tech way
By Jon Swartz
March 9, 2004

Gordon Housworth

Cybersecurity Public  InfoT Public  



Tenderness of the fabric of our internet


The tenderness of the internet and the architectural elements upon which we depend has again been shown in a series of events. First, a SANS Internet Storm Center report showed that an unprotected out-of-the-box PC is really no PC at all, in that its survival time to malware compromise from the internet is now 20 minutes, down from 40 minutes in 2003. It is instructive to check out the chart in SANS's Survival Time History. The Catch-22 is that the time to download critical patches exceeds the average survival time -- which is shorter than average if you are a university or broadband user (high risk) and longer if your ISP closes off key attack ports (lesser risk).
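The Catch-22 can be put in rough numbers. A minimal sketch, assuming compromise arrivals are memoryless (exponential) with the SANS 20-minute mean survival time; the 30-minute patch-download figure is my assumption for a slow link:

```python
import math

def p_compromised_before_patched(mean_survival_min, patch_time_min):
    """Probability an unpatched PC is hit before patching completes,
    under an exponential (memoryless) compromise model with the given
    mean survival time."""
    return 1.0 - math.exp(-patch_time_min / mean_survival_min)

# SANS figure: 20-minute mean survival; assume critical patches take
# 30 minutes to download on a slow link (assumption).
print(round(p_compromised_before_patched(20, 30), 2))  # prints 0.78
```

Under those assumptions roughly three out of four fresh installs are compromised before they finish patching, which is why out-of-band patching (CD, a firewalled staging network) matters.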

I am increasingly leaning toward quarantining both corporate and individual users whose PCs are not properly patched. Yes, I know that demand surges would occur at patch times, users would get angry, sysadmins would face daunting support demands, the patch process as we know it would have to adjust if not change, and vendors would have to produce stable tested apps, but why not raise the ante on MS and Linux to produce more secure products faster?

It is not as if firms such as Microsoft are unaware of the rising risk. An MS security consultant noted that "the day is likely to come when a virus or worm brings down everything… [no one] will have time to detect it [or] have time to issue patches or virus definitions and get them out there. This shows that patch management is not the be-all and end-all… If the human body did patch management the way [companies do], we'd all be dead."

If you can find comfort in that, you are more sanguine than I.

Software was historically released in paired cycles, the 'first for function and the second for speed,' implying the optimization of the preceding functional release. That second release is now all-important for patches as much as or more than for performance optimization. The "service pack 2" release, or SP2, of Windows XP is not offering me further comfort, as a variety of security and vulnerability-assessment firms are already finding critical flaws which observers believe will presage "worms that will circumvent SP2 features over the next few months."

To be fair, SP2 was intended to "add better security to the operating system's handling of network data, program memory, browsing activity and e-mail messages" rather than remove all faults in XP code, but this was a release a year in the making and touted for its security functions. MS had to expect that, whatever the firm said, the release would be viewed as Caesar's wife and have to be above reproach. Not being privy to MS development demands and restraints in getting the release out, I can only note that I did not feel that the release went far enough in its new embedded firewall capacity, specifically in not blocking outbound traffic that would be a signal that a worm or virus had struck the PC and was now intending to replicate itself. XP's firewall shares a flaw common to most software firewalls in that it can be "circumvented by any locally running program." (We use hardware firewalls precisely to get around this kind of limitation, as "Once hostile code has gained root access to your system, you've already lost. Any firewall can be easily disabled or circumvented with only a few lines of code.")
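A toy sketch of the outbound blocking I have in mind: a process may only use the ports registered for it, so a worm phoning home from an unknown binary is flagged even if inbound filtering let it in. The policy, process names, and ports are all hypothetical, and a real firewall would enforce this in the kernel, not in user code:

```python
def egress_allowed(process_name, dest_port, allowlist):
    """Decide whether an outbound connection should pass.

    Toy per-process egress policy: unknown processes, and known
    processes using unregistered ports, are blocked.
    """
    return dest_port in allowlist.get(process_name, set())

# Hypothetical per-process policy (all names illustrative).
policy = {
    "iexplore.exe": {80, 443},     # browser: web only
    "outlook.exe": {25, 110, 143}, # mail client: SMTP/POP3/IMAP
}
```

The design point is that the decision keys on the originating process, which is exactly the signal a purely inbound filter throws away.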

I think that MS should consider making an effective, low-cost hardware firewall just as they made mice and keyboards. In the latter case, MS wanted to insure reliable I/O (input/output) devices for PCs running their software. I think that the same yardstick should be applied to the former to insure reliable inbound/outbound data flows from and to the internet.

I will close on the ancillary issues of digitally signing -- contractually binding -- an electronic transmittal, and of certifying that documents or code blocks posted to the web have not been tampered with so as to change the content or insert new code -- trap doors -- to permit downstream mischief by any subsequent user that downloads that code.

If you are using the MD5 hash algorithm for digital signatures, a likely case given its popularity, or SHA-0, move now before exploits propagate. While the near zero-day exploit time for certain classes of worms and viruses is presumably too short at the moment for this exploit class, since the perp must write a specific backdoor cloaked with the same hash collision, it is only a matter of weeks, not years. Due as much to lack of familiarity as to difficulty, I would expect early exploits to come from crackers (criminal hackers) and state-sponsored entities seeking to attack commercial sites in order to spoof and then gain access to IP, if by no other means than by inserting trap doors in tools that will be used in the production and storage of future IP.

Also at risk is authentication of already publicly stored source code that used MD5 to certify that it has not been tampered with, as on open-source Apache servers, i.e., existing things are as much at risk as newly authored or signed items.

And note the progress of the attack on SHA-1, also used in PGP, which we use for secure client communications, as well as some signature algorithms. Still safe, but larger parallel arrays and better algorithms continue to narrow our margin of safety.
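For readers who want to see the digests in question, a minimal Python sketch using the standard hashlib module; the payload is a stand-in, and SHA-256 is offered as one collision-resistant alternative from the same family under attack's successors:

```python
import hashlib

def digest(data, algorithm="sha256"):
    """Hex digest of a byte string under the named algorithm."""
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

payload = b"apache-source-snapshot"  # stand-in for a real tarball
# An MD5 check only proves the bytes match *some* preimage; with
# collision attacks, two different payloads can share this value.
print(digest(payload, "md5"))
print(digest(payload, "sha256"))  # longer digest, no known collisions
```

In practice this means publishing (and checking) digests from a stronger family, and re-signing archives that today rest only on MD5.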

Study: Unpatched PCs compromised in 20 minutes
By Matt Loney and Robert Lemos
ZDNet (UK)
August 17, 2004, 12:22 PM PT

Pros point to flaws in Windows security update
By Robert Lemos
August 18, 2004, 12:47 PM PT

Crypto researchers abuzz over flaws
By Declan McCullagh
August 17, 2004, 9:10 PM PT

Gordon Housworth

Cybersecurity Public  InfoT Public  



Harbinger of wakeable "unexpected" data-leaks in uncommon packages: Coke's "Unexpected Summer" sweepstakes


For readers unfamiliar with a DoD SCIF, it is a Sensitive Compartmented Information Facility providing formal access controls approved by DCI and holding classified "information concerning or derived from intelligence sources, methods, or analytical processes." SCIFs commonly require data vaults impervious to all environmental threats (from fire to EMP), redundant (quadruple levels are not unheard of) power and cooling, 24/7/365 site perimeter and building security, and encryption of any data transmitted from it.

An installation of serious intent and, in our parlance, a data citadel worthy of attack. While items such as personal electronic devices, miniature mass storage devices, 2-way transmission devices, and camera/video phones are off-limits to SCIFs and increasingly more so in varying degrees to commercial facilities, I find the "device-type" of Coca-Cola's "Unexpected Summer" sweepstakes ad campaign to be an interesting harbinger of data-leaks in uncommon packages.

While Coke has downplayed the risk, saying that security concerns are unfounded and that "It cannot be an eavesdropping device," the cans are GPS-tracked to within about 50 feet anywhere in the US, and there is a "voyeuristic bonus" in that viewers can watch the tracking of cans that have been called in at the sweepstakes' site. These special cans were engineered by Airo Wireless to look and feel like regular cans and to be concealable inside multipacks of Coke varieties; the "only real challenge" Airo had was to "take the technology we had and get it to fit into the size and weight of a Coca-Cola can."

I would also question Coke's contention that the device "can only call Coke's prize center [and data] from the GPS device can only be received by Coke's prize center." If all logic is burned to firmware, this assertion may be correct up to the extra effort of logic chip substitution, but if more such devices enter the market it is very likely that they will use a commodity programmable chip set that can be reloaded with new instructions or additional hidden instructions so that the device 'works as advertised' while performing other illegal or intellectual property diversion steps.

The fact that Airo states Coca-Cola uses Airo's mapping software as well as its GPS devices "to pinpoint the exact location," and that "Once the winning cans have been found and activated, their locations can be viewed at," would seem to contradict Coke's assertions. As Airo has multiple GPS/cellular technology applications, including "custom equipped phones with EKG (electrocardiogram) monitors for heart patients, emergency devices for executive global protection, roadside assistance, house arrest monitoring, emergency phones for victims of domestic violence and services for family protection," I would assume that these lean more to the programmable commodity side than to a hard-wired one-off design.

By mere commercial intent or malicious collector activity, one should expect to see more 'wakeable' devices that communicate in various ways to external sources, yet do not look the part from a security inspection standpoint. While this Coke device may not be harmful, other more expanded, or hijackable, items that follow may well be.

While articles had passed my desk on these special cans, it was interesting to see a SCIF-specific handling notice about them:

The Coca Cola Company has a summer game promotion running from 5/17 - 7/12/04 in all 50 states and the District of Columbia that has the capability to compromise classified information. The company has intermixed approximately 120 Coca-Cola cans that actually contain GPS locators equipped with a SIM card, keypad and GPS chip transponder so it functions as a cell phone and GPS locator. The cans are concealed in specially marked 12, 18, 20, or 24 can multi-packs of Coca-Cola Classic, Vanilla Coke, Cherry Coke and Caffeine Free Coke. The hi-tech Coke "Unexpected Summer" promotion can has a button, microphone, and a tiny speaker on the outside of the can. Pressing the larger red button starts the game in process, thus activating the GPS signal and a cell phone used by the customer to call a special hotline. Consumers who find these cans, activate the technology, and call the hot line must agree to allow Coke "search teams" using the GPS tracker (accurate to within 50 feet), to surprise them anyplace, anytime within three weeks to deliver a valuable prize.

In accordance with DIA, no specific policy for this promotion will be issued. However, DISA employees with access to SCIFs should take a common sense approach and if one of these cans is found inside a SCIF, they should treat it as they would any two-way electronic device in a SCIF and remove it immediately. Until such time as this sales promotion ends and all 120 cans are accounted for, Coca-Cola packages should be opened and inspected before taking them into any area marked as a "Restricted Area" or where classified meetings/discussions, etc. are in progress or have the potential to occur at any time.

Coke sneaks phones, GPS chips into cans
By Theresa Howard, USA TODAY
Posted 5/9/2004 9:27 PM Updated 5/9/2004 9:30 PM

Coca-Cola promotion prompts security measures
Some military bases on edge over cell phones, GPS chips in cans
The Associated Press
Updated: 6:39 p.m. ET July 1, 2004

Gordon Housworth

Cybersecurity Public  InfoT Public  


