Dec 27, 2007

NMAP 4.50 Release

Nmap was first released in 1997, so this release celebrates the 10th anniversary. Major new features since 4.00 include the Zenmap cross-platform GUI, 2nd Generation OS Detection, the Nmap Scripting Engine, a rewritten host discovery system, performance optimization, advanced traceroute functionality, TCP and IP options support, and nearly 1,500 new version detection signatures.

The Nmap Changelog describes 320 improvements since 4.00 in more than 1,500 lines. Here are the highlights:

Zenmap graphical front-end and results viewer
Zenmap is a cross-platform (tested on Linux, Windows, Mac OS X) GUI which supports all Nmap options. It allows easier browsing, searching, sorting, and saving of Nmap results. Zenmap replaces the venerable but dated NmapFE, which was the default Nmap GUI for more than 8 years.
2nd Generation OS Detection
Nmap revolutionized OS detection when the feature was first released in October 1998, and it served us well for more than 9 years as the database grew to 1,684 fingerprints. The new 2nd generation system incorporates everything we learned during those years and has proven itself more effective and accurate. The new database has 1,085 signatures, ranging from the 2Wire 11701HG wireless ADSL modem to the ZyXEL ZyWall 2 Plus firewall. In addition to more than 500 general purpose OS fingerprints, it contains 94 switches, 92 printers, 81 WAPs, 63 broadband routers, 31 firewalls, 19 VoIP phones, 16 webcams, 8 cell phones, and more. Nmap currently has fingerprints for only 1 ATM machine and 2 game consoles. The new system is extensively documented.
Nmap Scripting Engine
The Nmap Scripting Engine allows users to write (and share) simple scripts to automate a wide variety of networking tasks. Those scripts are then executed in parallel with the speed and efficiency you expect from Nmap. Users can rely on the growing and diverse set of scripts distributed with Nmap, or write their own to meet custom needs. Nmap 4.50 includes 40 scripts ranging from simple (showHTMLTitle, ripeQuery) to more complex (netbios-smb-os-discovery, SQLInject, bruteTelnet). An NSE library system (NSELib) allows common functions and extensions to be written in Lua or C. NSE can efficiently handle normal TCP or UDP sockets, or read and write raw packets using libpcap. The system and API are extensively documented. You can try out NSE (along with other features) by adding the -A option to your Nmap command line.
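As a quick, hedged illustration, either of the following command lines should run the default script set against a host you are authorized to scan (scanme.nmap.org is the test host the Nmap project provides for exactly this purpose):

nmap -A scanme.nmap.org
nmap -sC -sV scanme.nmap.org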
Performance and accuracy improvements
Not only were the host discovery and OS detection systems completely replaced, but the port scanning algorithms were improved in the process. We also optimized the configure scripts and removed a lot of dead code to improve compile times and reduce the distribution size. Another performance boost came from ignoring certain rate-limited ICMP error messages in cases, such as SYN scan, where the ICMP error conveys the same information as receiving no response at all.
Version detection enhancements
Version detection allows Nmap to determine the service listening on a port using protocol communication rather than making assumptions based on port number. In addition to the service name, the system can often also deduce other information such as application name, version number, device type, operating system, and more. The database has grown more than 40% since 4.00 to 4,542 signatures representing 449 services. The service protocols with the most signatures are http (1,473), telnet (459), ftp (423), smtp (327), pop3 (188), http-proxy (111), ssh (104), imap (103), irc (46) and nntp (44).
Host discovery (ping scanning) system rewritten
The old host discovery system (massping()) was removed and the primary port scanning engine (ultra_scan()) augmented to support host discovery. The new system is more accurate, and in some cases faster. We removed the artificial limits on the number of ports and protocols (such as -PS arguments) which can be used for discovery. A new IP protocol ping type (-PO) was added which sends IP headers with your specified protocol numbers in the hope of eliciting a response.
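As a sketch only (the protocol numbers and target range below are placeholders), a ping-scan-only discovery sweep that probes with ICMP (1), IGMP (2) and IP-in-IP (4) might look something like this:

nmap -sP -PO1,2,4 192.168.0.0/24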
--reason explains why a port is open/closed/filtered
The new --reason option adds a column to the Nmap port state table which explains why Nmap assigned a port status. For example, a port could be listed as “filtered” because no response was received, or because an ICMP network unreachable message was received. With --reason, you can find out which was the case without digging through --packet-trace logs.
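For example (the target below is just a placeholder), simply append the flag to an ordinary scan:

nmap --reason scanme.nmap.org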
Advanced traceroute support
Nmap now offers a --traceroute option which uses Nmap data to determine which sort of packets are most likely to slip through the target network and produce useful results. The system is well optimized for speed and bandwidth efficiency, and the clever output system avoids repeating the same initial hops for each target system. The -A option now includes traceroute.
TCP and IP Options
Nmap now supports IP options with the new --ip-options flag. You can specify any options in hex, or use “R” (record route), “T” (record timestamp), “U” (record route & timestamp), “S [route]” (strict source route), or “L [route]” (loose source route). Specify --packet-trace to display IP options of responses. For further information and examples, see this post. TCP options are now reported by --packet-trace too.
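A hedged example: the following asks Nmap to set the record-route option and print the IP options seen in replies (again, the target is only a placeholder):

nmap --ip-options "R" --packet-trace scanme.nmap.org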
Other changes to enjoy in Nmap 4.50:
  • Added the --open option, which causes Nmap to show only open ports. Ports in the states “open|filtered” and “unfiltered” might be open, so those are shown too unless the host has an overwhelming number of them.
  • The --scanflags option now also accepts “ECE”, “CWR”, “ALL” and “NONE” as arguments.
  • The new --servicedb and --versiondb options let you specify a custom Nmap services file (port name to number mapping and port frequency data) or version detection database.
  • IP Protocol scan (-sO) now sends proper protocol headers for TCP, UDP, ICMP, and IGMP.
  • Improved nmap.xsl, which is used to transform Nmap XML output into pretty HTML reports.
  • Added the --unprivileged option, which is the opposite of --privileged. It tells Nmap to treat the user as lacking network raw socket and sniffing privileges. This is useful for testing, debugging, or when the raw network functionality of your operating system is somehow broken.
  • Nmap now allows multiple ignored port states. If a 65K-port scan had 64K filtered ports, 1K closed ports, and a few dozen open ports, Nmap used to list the dozen open ones among a thousand lines of closed ports. Now Nmap will give reports like “Not shown: 64330 filtered ports, 1000 closed ports” or “All 2051 scanned ports on 192.168.0.69 are closed (1051) or filtered (1000)”, and omit all of those ports from the table. Open ports are never ignored.

Dec 12, 2007

UnicornScan

Unicornscan is a new information gathering and correlation engine built for and by members of the security research and testing communities. It was designed to provide an engine that is Scalable, Accurate, Flexible, and Efficient. It is released for the community to use under the terms of the GPL license.

BENEFITS
Unicornscan is an attempt at a User-land Distributed TCP/IP stack. It is intended to provide a researcher with a superior interface for introducing a stimulus into, and measuring a response from, a TCP/IP enabled device or network. Although it currently has hundreds of individual features, its main abilities include:
  • Asynchronous stateless TCP scanning with all variations of TCP Flags.
  • Asynchronous stateless TCP banner grabbing.
  • Asynchronous protocol specific UDP Scanning (sending enough of a signature to elicit a response).
  • Active and Passive remote OS, application, and component identification by analyzing responses.
  • PCAP file logging and filtering.
  • Relational database output.
  • Custom module support.
  • Customized data-set views.

Get it from http://www.unicornscan.org/


Dec 11, 2007

Zero-day Flaw in HP Laptop

From http://www.anspi.pl/~porkythepig/hp-issue/kilokieubasy.txt

Advisory:
Multiple Hewlett-Packard notebook series are prone to a remote code execution attack. The manufacturer's preinstalled software contains a critical flaw within the software built to support one-touch button quick feature access.
Overview:
Software called "HP Info Center" is shipped with almost every HP laptop model for few years. It is designed to support user with quick system information and hardware configuration using single button touch. One of its ActiveX controls deployed by default by the vendor has three insecure methods that allow a malicious person to target the HP notebook machines for a remote code execution and remote registry manipulation based attacks.
Impact:
  • Remote code execution
  • Remote system registry read/write access
  • Remote shell command execution

Oct 10, 2007

Open Source Alternative

Recently I discovered this web site called Open Source Alternative.

Open Vs. Closed
Find open source alternatives to your favourite commercial products. Browse through our software categories and compare pros and cons of both commercial products as well as open source software.

Why open source
By choosing an open source product, the user obtains a number of advantages compared to commercial products. Besides the fact that open source is always available for free, it is a transparent application, in that you are invited behind the scenes to view all the source code and thereby to suggest improvements to the product. Furthermore, every product is covered by a large, dedicated network, or community, which is more than willing to answer any questions you may have.

Sep 25, 2007

Colors in Security

This is what I collected from http://taosecurity.blogspot.com/2007/09/security-jersey-colors.html:
  • Red Team: A Red Team is an adversary simulation team. The Red Team attacks the asset to meet an objective. This activity is called penetration testing in the commercial world.

  • Blue Team: A Blue Team is a security posture assessment and evaluation team. The Blue Team determines the vulnerabilities and exposures of an enterprise. This activity is called vulnerability assessment in the commercial world.

  • White Team: A White Team (or usually a "White Cell") controls the environment during an exercise. The White Cell provides the framework in which the Red Team attacks friendly forces. (Note that in some situations the friendly forces are called the "Blue Team." This is not the same Blue Team that conducts vulnerability assessments and evaluations. Blue in this case is simply used to differentiate from Red.)

  • Green Team: The Green Team is usually a training group that helps the asset owners. Alternatively, the Green Team helps with long-term vulnerability and exposure remediation, as identified by the Blue Team. These descriptions are open for discussion because I haven't seen too many green team activities.
In addition, I would also like to add in a couple more teams.
  • Black Team: The Black Team is supposedly for forensics and investigation. I chose this color because it matches the "black box" found in all aeroplanes.

  • Brown Team: The Brown Team is a dedicated Incident Response Team. They are in charge of everything during an emergency and act/react to bring the situation under control.
P/S: How come it seems similar to 6-Hat Thinking?

Aug 18, 2007

Intrusion Detection In-Depth

SEC503: Intrusion Detection In-Depth delivers the technical knowledge, insight, and hands-on training you need to defend your network with confidence. You will learn about the underlying theory of TCP/IP and the most used application protocols, such as HTTP, so that you can intelligently examine network traffic for signs of an intrusion.

The hands-on training (Aug. 05 - Aug. 10 2007) in SEC503 is intended to be both approachable and challenging for beginners and seasoned veterans. There are two different approaches for each exercise. The first contains guidance and hints for those with less experience, and the second contains no guidance and is directed toward those with more experience. In addition, an optional extra credit question is available for each exercise for advanced students who want a particularly challenging brain teaser. A sampling of hands-on exercises includes the following:

  • Day 1: Hands-On: Introduction to Wireshark
  • Day 2: Hands-On: Writing tcpdump filters
  • Day 3: Hands-On: IDS/IPS evasion theory
  • Day 4: Hands-On: Snort rules
  • Day 5: Hands-On: Analysis of three separate incident scenarios
  • Day 6: Hands-On: The entire day is spent engaged in the NetWars: IDS Version challenge

Link: Network Intrusion Detection | SANS SEC503 | Intrusion Detection Training

Jul 4, 2007

iPhone root Password Cracked

From Hackint0sh
We managed to obtain and crack the hashes of the user passwords for the iPhone OS. The password for root is “alpine”; the “mobile” user account's password is “dottie”.

Is it sick to have the root password to all iPhones worldwide? Well, not really; there is no terminal yet to log in with.

Jun 14, 2007

A Whitehat Hacker

Vista Recovery Command Prompt

Did you know that the Command Prompt tool found in Vista's System Recovery Options doesn't require a User Name or Password? And that the Command Prompt provides Administrator level access to the hard drive? For multiple versions of Windows?

All you need is a Vista Install DVD and you're all set to go.
  • Just boot from the DVD and select the Repair option.
  • Then select the Command Prompt.
  • And you'll end up with an Administrator-privileged Command Prompt.
Interesting. You can find more details from Mr. Kimmo Rousku.

This kind of reminds us of a Windows XP Home feature. The Administrator account password for XP Home is blank by default and is hidden in Normal Mode. But if you select F8 during boot for Safe Mode, you can access the Administrator account and have complete access to the computer.

Physical security of your computer is paramount.

Jun 4, 2007

Magic Numbers or Snake Oil?

The Common Vulnerability Scoring System

Can a single number sum up the full significance of a security vulnerability? The CVSS attempts to prove that it can, but it has its weak points.

The Common Vulnerability Scoring System (CVSS) is a relatively new attempt at consistent vendor-independent ranking of software and system vulnerabilities. Originally a research project by the US National Infrastructure Advisory Council, the CVSS was officially launched in February 2005, and hosted publicly by the Forum of Incident Response and Security Teams (FIRST) from April 2005. By the end of 2006 it had been adopted by 31 organisations and put into operational service by 16 of these, including (significantly) the US-CERT, a primary source of vulnerability information.

The CVSS attempts to reduce the complicated multi-variable problem of security vulnerability ranking to a single numerical index in the range of 0 to 10 (maximum), that can be used in an operational context for such tasks as prioritising patching. It uses a total of 12 input variables each of which can take from two to four alternative pre-determined values. The calculation is broken into three cascaded stages, the first two of which yield visible intermediate results each of which is fed into the following stage. This three-stage process consists of an absolute Base Score calculation that describes ease of access and scale of impact, followed by a Temporal Score calculation that applies a zero to moderate negative bias depending on the current exploitability and remediation position (both of which may well change over time), and, finally, an Environmental Score calculation that is performed by end users to take into account their individual exposure landscape (target space and damage potential).

The third (Environmental) stage has the greatest influence on the final result, and without it a CVSS ranking is really only a partial index. Therefore it must be recognised that a published CVSS score is, unlike the public conception of common vendor rankings (e.g. Microsoft "Critical"), not the final answer. Of course in reality they are not final indices either. The end user should always expect to complete the ranking process by applying some kind of environmental calculation to any published index to allow for local priorities, and the task becomes very difficult where vendor-specific rankings are derived using differing proprietary methods. In the case of the CVSS, a maximal Temporal score of 10 may be reduced by the Environmental calculation even to zero, or alternatively even very low Temporal scores raised up to around 5, once the user's exposure landscape is taken into account. The second condition is significant, as, while nobody would ignore, for example, a Microsoft "Critical" rating, vulnerabilities classified as low priority by vendors could have major impact on certain users, depending on the criticality of the vulnerable systems to their specific business processes.

The Good, the Bad and the Ugly

So what are the pros and cons of the CVSS? On the positive side, it attempts to formalise and objectify the decision process applied to a very complicated problem, potentially improving consistency, both over time and across platforms, vendors and products. It is quite simple, and the input variables and their pre-determined alternative numerical values in the main appear well chosen. It is transparent in that its mechanism is publicly documented. It breaks new ground in attempting to include formal recognition of the user's all-important exposure landscape. But on the other hand, no system is better than its inputs. Choices have to be made as to which value of each variable to select, and the quality of the result depends entirely on the quality of all the choices that lead to it. These choices are externally expressed in natural language in the available calculators. Fortunately, the alternatives contributing to the Base and Temporal scores are relatively unambiguously expressed, and as these decisions will normally be made by experienced security specialists in reporting organisations, the opportunity for significant error is minimised.

However, while the inclusion of the environmental component in the calculation is one of the greatest potential strengths of CVSS, it could also prove to be its Achilles' heel. Not only does the Environmental calculation have the greatest single influence on the final score, but the values of the two variables that contribute to it (collateral damage potential and target distribution) are expressed as "low", "medium" and "high": a notoriously subjective classification system. Poor decisions here will lead to serious errors that can completely undermine the quality of the more objective earlier stages. Furthermore the techno-centric thinking of the originators of CVSS is most apparent here. The guidance notes describe these two environmental variables solely in terms of the percentage of hosts that are vulnerable and the potential for physical damage. This completely misses the point of the differing business criticality of individual systems, which cannot in the real world be assessed "off the cuff" by technical personnel alone.

How can I use the CVSS?

Given the above, how can you currently use CVSS in the real world? In its most basic application (ignoring for now the questionable Environmental parameters), the published Base or Temporal scores for the vulnerabilities in hand at any given moment should simply be sorted into descending numerical order and addressed as swiftly as possible from the top of the list downwards, whatever the actual range or absolute values of the scores. Treat it as a relative rather than an absolute ranking system and get on with the job of patching on a continuous basis. Of course the list and its order really have to be updated regularly as new bugs are announced. This is a completely different approach from the widely advocated calendar-interval regime: "patch Tuesday", "medium severity = 1 to 4 weeks", which is of course in reality patch team workload management not corporate exposure minimisation (but of course we all really know that, even if we take the easy way out in practice).

Whichever of the two you choose, it is important to be consistent in always using either the Base or Temporal score in such a simple application, and the Temporal score is to be preferred as it partially reflects whether a fix is available to be implemented. Despite the familiar tendency to bracket ratings into such categories, "critical", "medium", "low", this is not useful given the extra detail offered by the numerical scoring. How do you prioritise among a dozen simultaneous "criticals"? The quite granular numerical scoring method makes it much less likely that a significant number of vulnerabilities on your current list will have exactly the same ranking. Plus, it is a transparent system. You can often see how the score was arrived at, so you might learn something of use for the future.

At a more sophisticated level, the relationship between the Base and Temporal scores can be used to extract further guidance. If the two scores are essentially identical (within 5 per cent or so) this generally indicates that you are more exposed than if the Temporal score is lower than the Base score by, say, 10 to 30 per cent. It means that a viable exploit exists and limited (or no) remediation is available. A bigger difference in the scores indicates that exploits are to some degree unproven or imperfect and/or that a fix at some level is available. So diverging Base and Temporal scores are a flag that the vulnerability should be reviewed to find out the new state of play, and the vulnerability may have to be moved up or down your priority list. This obviously depends on your sources of intelligence updating the Temporal scores, but supposing the information is available, somewhat better prioritisation can result.

The Environmental score, although at present primitively implemented, can be used to some extent but the existing parameters will tend to return scores on the low side in non-homogeneous environments where individual systems are business critical or where the landscape is not dominated by a small number of platforms or products. It should only be applied by newbies where too many of the Base (or Temporal) scores in the sorted list have the same value and are therefore not effectively ranked, and then only with caution, as local homework will be needed to validate the results.

Better results can be made of the Environmental score if you are prepared to redefine its input parameters to suit your business context. Selection of the appropriate collateral damage parameter must include the cost to the business of a successful exploit, not just the cost of technical damage and remediation. Choice of target distribution parameter must include the business significance of the breached asset: it may be the only server in a couple of hundred that is running a given system, but if that system is business critical the extent of the exposure is much greater than 0.5 per cent. However, unless you already have considerable detailed business intelligence at your fingertips it is probably dangerous at present to rely on the Environmental score, given its large effect on the final result. This is where we most look to the CVSS developers to improve the system. For now, environmental considerations will for the most part probably remain "seat of the pants". However, supposing revision of the Environment score calculation gets due attention, it promises to become a very powerful tool.

The way forward for the CVSS

So the CVSS has considerable potential as a simple and effective method for vulnerability ranking, but it needs further work to make it more user-friendly and to render the Environmental score more robust and meaningful. The Environmental score parameters need to be redefined to include business impact, which is something that should ideally be done by the CVSS developers rather than ad hoc by individual end users. It is likely that the Environmental score calculation will have to become more sophisticated before its true worth emerges. But from the functional perspective probably the most significant omission is that all the approved calculators currently expect the whole calculation process to be performed in a single operation by selection of the complete set of natural language parameters. None of them allow the end user simply to enter a published numerical Base or Temporal score from which to derive a local Environmental score. At this time the calculation that is most important to the end user must be done "by hand" unless an advisory happens to list the parameters used to derive the published score.

Overall, the CVSS is a relatively untried system but one which, by virtue of its transparency, potentially contains less snake oil than the closed ranking systems we are used to. We must hope that it will evolve over time into a robust universal standard: something that is much needed in this field.

See also:

The secrets of about:config

May 29, 2007 (Computerworld) Ever since its debut, Firefox has garnered a reputation for being an enormously customizable program, both through its add-on architecture and its internal settings. In fact, many of Firefox's settings aren't exposed through the Tools > Options menu; the only way to change them is to edit them manually. In this article, we'll explore some of the most useful Firefox settings that you can change on your own, and that aren't normally available through the program's graphical interface.

The closest analogy to how Firefox manages its internal settings is the Windows Registry. Each setting, or preference, is given a name and stored as a string (text), integer (number) or Boolean (true/false) value. However, Firefox doesn't keep its settings in the registry, but in a file called prefs.js. You can edit prefs.js directly, but it's often easier to change the settings through the browser window.
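If you do edit by hand, each preference in prefs.js is one JavaScript-style line. Purely for illustration (these particular names and values are ones discussed later in this article, not recommendations), entries look like this:

user_pref("nglayout.initialpaint.delay", 0);
user_pref("browser.search.openintab", true);
user_pref("keyword.URL", "http://search.yahoo.com/search?p=");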

Type about:config in the address bar and press Enter, and you'll see all the settings currently enumerated in prefs.js, listed in alphabetical order. To narrow down the hundreds of configuration preferences to just the few you need, type a search term into the Filter: bar. (Click the Show All button or just clear the Filter: bar to get the full list back again.)

To edit a preference, double-click on the name and you'll be prompted for the new value. If you double-click on an entry that has a Boolean value, it'll just switch from true to false or vice versa; double-click again to revert to the original setting. Not all changes take effect immediately, so if you want to be absolutely certain a given change is in effect, be sure to close and reopen Firefox after making a change.

Note that not every setting in about:config exists by default. Some of them have to be created manually. If you want to add a new preference, right-click somewhere on the page and select New, then select the type of item to create (String, Integer or Boolean) and supply the name and value.

Before you begin

Here are a few caveats to keep in mind as you explore and tweak:

Not everyone will get the same benefits by enabling these tweaks. This is especially true for changing the network settings. If you habitually visit sites that don't allow a large number of connections per client, for instance, you won't see much benefit from raising the number of connections per server.

Some hacks may have a limited shelf life. With each successive release of Firefox, the need for tweaking any of the performance-related config settings (like the network settings) may dwindle as Firefox becomes more self-tuning based on feedback from real-world usage scenarios. In short, what works now may not always work in the future -- and that might not be a bad thing.

Keep a log of everything you change, or make backups. If you tweak something now and notice bizarre activity in a week, you'll want to be able to track back to what was altered and undo it. Firefox does show which about:config changes have been set manually, but this isn't always the most accurate way to find out what you changed.

To make a backup of your preferences in Firefox, just make a copy of the file prefs.js, which is kept in your Firefox profile folder. If you mess something up, you can always copy this file back in. (Be sure to shut down Firefox before making a copy of prefs.js or moving a copy back into the profile folder!)

In Windows XP, the profile folder is
\Documents and Settings\<username>\Application Data\Mozilla\Firefox\Profiles\<profile>.default\

In Windows Vista, this folder is
\Users\<username>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile>.default\

Note that Application Data and AppData are hidden folders by default, so they may not show up unless you force Explorer to show hidden objects. (Open the Control Panel, double-click Folder Options, select the View tab, select "Show hidden files and folders" and click OK.)

In Mac OS X, the profile folder is
~/Library/Application Support/Firefox/Profiles/<profile>.default/

and in Linux it's
~/.mozilla/firefox/<profile>.default/

but on those platforms it's usually quicker simply to search for prefs.js.

Alternatively, you can use the handy Firefox Extension Backup Extension (FEBE). It backs up not only the prefs.js file but just about every other thing in Firefox -- extensions, themes, cookies, form history and so on.

Speed up page display

Some of the more recent Firefox customizations I've examined are ways to speed up the rendering of Web pages. The settings to do this are a little arcane and not terribly self-explanatory, but with a little tinkering, you can often get pages to pop up faster and waste less time redrawing themselves.

Start rendering pages faster
Creating an nglayout.initialpaint.delay integer preference lets you control how long Firefox waits before starting to render a page. If this value isn't set, Firefox defaults to 250 milliseconds, or .25 of a second. Some people report that setting it to 0 -- i.e., forcing Firefox to begin rendering immediately -- causes almost all pages to show up faster. Values as high as 50 are also pretty snappy.

Reduce the number of reflows
When Firefox is actively loading a page, it periodically reformats or "reflows" the page as it loads, based on what data has been received. Create a content.notify.interval integer preference to control the minimum number of microseconds (millionths of a second) that elapse between reflows. If it's not explicitly set, it defaults to 120000 (.12 of a second).

Too many reflows may make the browser feel sluggish, so you can increase the interval between reflows by raising this to 500000 (500,000, or 1/2 second) or even to 1000000 (1 million, or 1 second). If you set this value, be sure to also create a Boolean value called content.notify.ontimer and set it to true.

Control Firefox's 'unresponsive' time
When rendering a page, Firefox periodically runs a little faster internally to speed up the rendering process (a method Mozilla calls "tokenizing"), but at the expense of being unresponsive to user input for that length of time. If you want to set the maximum length of time any one of these unresponsive periods can be, create an integer preference called content.max.tokenizing.time.

Set this to a multiple of content.notify.interval's value, or even the same value (but higher is probably better). If you set this to something lower than content.notify.interval, the browser may respond more often to user input while pages are being rendered, but the page itself will render that much more slowly.

If you set a value for content.max.tokenizing.time, you also need to create two more Boolean values -- content.notify.ontimer and content.interrupt.parsing -- and set them both to true.

Control Firefox's 'highly responsive' time
If Firefox is rendering a page and the user performs some kind of command, like scrolling through a still-loading page, Firefox will remain more responsive to user input for a period of time. To control how long this interval is, create an integer preference called content.switch.threshold.

This is normally triple the value of content.notify.interval, but I typically set it to be the same as that value. Set it to something very low -- say, 10000 (10,000 microseconds) -- and the browser may not respond as snappily, but it may cause the rendering to complete more quickly.

If you haven't already created the Boolean values content.notify.ontimer and content.interrupt.parsing and set them both to true in conjunction with content.max.tokenizing.time, you'll need to do so to make content.switch.threshold work properly.

If you are more inclined to wait for a page to finish loading before attempting to do anything with it (like scroll through it), you can set content.max.tokenizing.time to a higher value and content.switch.threshold to a lower value to allow Firefox to finish rendering a page faster at the expense of processing user commands. On the other hand, if you're the kind of person who likes to scroll through a page and start reading it before it's done loading, you can set content.max.tokenizing.time to a lower value and content.switch.threshold to a higher one, to give you back that much more responsiveness at the cost of page-rendering speed.
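Pulling the rendering tweaks above into one place, a prefs.js (or user.js) sketch might look like this; the numbers are one plausible combination drawn from the ranges discussed above, not a canonical recommendation:

user_pref("nglayout.initialpaint.delay", 0);
user_pref("content.notify.interval", 500000);
user_pref("content.notify.ontimer", true);
user_pref("content.interrupt.parsing", true);
user_pref("content.max.tokenizing.time", 1000000);
user_pref("content.switch.threshold", 500000);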

Have tabbed browsing your way

Right from the start, one of Firefox's strengths has been tabbed browsing. But if the tabs don't behave quite the way you want them to by default, or you hate the way the default behaviors have changed since Firefox 1.x, the following changes will bring them in line.

Corral close buttons
The integer preference browser.tabs.closeButtons controls how the close buttons (the "X" icons) are rendered on tabs:

0: Display a close button only on the currently active tab. This is a nice way to keep from accidentally smacking into a close button for the wrong tab.

(You can press Ctrl-F4 to close only the current tab, but many mouse-centric people never bother to do this.)

1: Display close buttons on all tabs (default).

2: Don't display any close buttons; the only way to close a tab is by pressing Ctrl-F4.

3: Display one close button at the end of the tab bar (Firefox 1.x's default).

Open search results in a new tab

This one is a favorite of mine. When browser.search.openintab (a Boolean preference) is set to true, any searches launched from the Search tool bar are opened in a new tab instead of overwriting the contents of the current one. I can't tell you the number of times I mistakenly wiped out my current page before I started using this.

Note that if you launch a new browser window with Ctrl-N and perform a search there, you'll see the search results and the default home page for the new browser instance loading in separate tabs.

Open bookmark groups in new tabs
If you open a group of bookmarks at once, Firefox's default behavior is to replace any existing tabs with the newly opened pages. Set browser.tabs.loadFolderAndReplace (Boolean) to false, and opening groups of bookmarks will append new tabs to the existing window instead of overwriting existing ones.

Squeeze more tabs into the tab bar
The integer preference browser.tabs.tabMinWidth controls how narrow, in pixels, tabs can be shrunk down before scroll arrows appear on the left and right edges of the tab bar.

The default is 100, but you can set this to something smaller so you can fit more tabs in the bar at once. Note, however, that you might find the shortened titles harder to read.

In the same vein, the integer preference browser.tabs.tabClipWidth sets the minimum width, in pixels, that a tab must be in order to show a close button. This is 140 by default, so if you set this to something lower, you'll see more tabs with close buttons when the tab bar is heavily populated.
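For reference, here is what the tab-related tweaks from this section could look like as prefs.js entries (the values simply mirror the examples in the text above):

user_pref("browser.tabs.closeButtons", 0);
user_pref("browser.search.openintab", true);
user_pref("browser.tabs.loadFolderAndReplace", false);
user_pref("browser.tabs.tabMinWidth", 75);
user_pref("browser.tabs.tabClipWidth", 100);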

Make the user interface behave

Another big reason people hack Firefox's settings is to modify the user interface -- either to make it a little easier to do something, or to revert to a behavior that was prevalent in Version 1.x but changed in 2.0.

Get case-sensitive, in-page searches
The integer preference accessibility.typeaheadfind.casesensitive controls how Firefox's "Find as You Type" feature behaves. The default is 0 for case-insensitive searches; set it to 1 for case-sensitive matching.

Control address bar searches
You may have noticed that if you type something into Firefox's address bar that's not an address (a "keyword"), Firefox typically passes it on to Google as an "I'm Feeling Lucky" search term. The exact search engine string to use is defined in the string preference keyword.URL; if you want to change it to something else, you can simply edit this string.

For instance, to make Microsoft's Live.com the default keyword search, set this string to
http://search.live.com/results.aspx?q=

For a Yahoo search, it would be
http://search.yahoo.com/search?p=

If you want to restore the default search, use
http://www.google.com/search?ie=UTF-8&oe=
UTF-8&sourceid=navclient&gfns=1&q=

Finally, if you want to turn this address-bar keyword functionality off altogether, set the Boolean preference keyword.enabled to false.

Note that with Google, the more generic the keyword, the less likely it is to be used as an "I'm Feeling Lucky" search -- although what constitutes "generic" isn't always clear. For instance, typing "clean" into the address bar returns a generic Google search page, but "sideways" takes me to the Internet Movie Database entry for the movie of that name (the "I'm Feeling Lucky" result). Your mileage will almost certainly vary.

Select just a word
The Boolean preference layout.word_select.eat_space_to_next_word governs one of Firefox's tiny, but for me incredibly annoying, little behaviors. When you double-click on a word in a Web page to select it, Firefox automatically includes the space after the word. Most of the time I don't want that; I just want the selection to stop at the end of the word. Setting this to false will defeat that behavior.

Select a word and its punctuation
Somewhat contrarily, if you double-click a word that's next to any kind of punctuation mark, Firefox defaults to selecting only the word itself, not its adjacent punctuation. Set the Boolean preference layout.word_select.stop_at_punctuation to false to select the word and its adjacent punctuation.

Get Alt-hotkey shortcuts back
One minor change in Firefox 2 was the way in which form elements on a Web page had hotkey bindings assigned to them. In Firefox 1.x, when a Web page assigned a hotkey to a form element, you pressed Alt-hotkey to access it. In Version 2.x, this was changed to Alt-Shift-hotkey. To revert to the original 1.x behavior, set the integer preference ui.key.contentAccess to 4. This is useful if you have, for instance, a Web-based interface you spend a lot of time in, and use Alt-key bindings to do things quickly in that particular page.

Note that one possible consequence of setting this back to the old behavior is that Alt-key bindings on a Web page can now override the default key sequences for the program itself (such as Alt-S for History), but you can always get around this by tapping Alt to activate the menu and then tapping the program hotkey in question.

Change scrollbar behavior
By default, clicking in the empty areas of the Firefox window's scrollbar will simply cause the view to move up or down one page. You can change this behavior by creating a Boolean preference called ui.scrollToClick and setting its value to true. Now clicking in a scrollbar will cause the view to jump directly to that point in the page (basically the same as dragging the scrollbar to that position).

Get click-and-hold context menus back (for Macs only)
If you want to restore the classic click-and-hold context-menu behavior on the Macintosh, edit or create the Boolean preference ui.click_hold_context_menus and set it to true.
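Again as a sketch, the user-interface tweaks from this section could be captured like so; pick only the lines you actually want (the keyword.URL value repeats the Yahoo example above, and ui.scrollToClick must be created first, as described above):

user_pref("accessibility.typeaheadfind.casesensitive", 1);
user_pref("keyword.URL", "http://search.yahoo.com/search?p=");
user_pref("layout.word_select.eat_space_to_next_word", false);
user_pref("layout.word_select.stop_at_punctuation", false);
user_pref("ui.key.contentAccess", 4);
user_pref("ui.scrollToClick", true);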

Hack network connections

The very first batch of Firefox hacks I learned about was how to override its network defaults. Some of Firefox's out-of-the-box settings for how it deals with network connections are fairly conservative, probably because Firefox has no way of knowing what kind of network it's using (dial-up vs. broadband, etc.). If you have a network that readily supports multiple simultaneous connections, you can make a number of changes to Firefox to take advantage of that.

But proceed with caution. If Firefox's network settings are set too aggressively, they can lead you to being blacklisted for a short time by a given remote server. And you should certainly get permission from the IT department before attempting this kind of hack in a corporate environment. Regardless, moderation is the key. For the most part, I find that setting the network settings to absurdly high numbers does not accomplish much of anything; it helps to ramp them up a bit, but generally not much more than that.

Maximize connections to multiple servers
The integer preference network.http.max-connections controls how many simultaneous network connections Firefox will make at any one time to any number of Web servers. One typical way this pays off is if you have Firefox set to load multiple home pages in different tabs at once, or if you access pages that aggregate contents from several different servers (for instance, multiple advertising systems).

By default, this is set to 24, which should work well for most network connections, but you can raise it to 32 and see if that has any effect. (I've seen people raise this as high as 64, but anything above 32 doesn't seem to provide much discernible payoff.)

Maximize connections to the same server
The integer preference network.http.max-connections-per-server controls how many separate connections Firefox makes to the same server, which allows multiple elements in a page to be downloaded in parallel. Normally, this is set to 8, but some people choose to set it as high as 16.

Note, however, that some Web servers will block you if you try to establish more than 8 inbound connections, typically as a bandwidth-protection or antileeching measure -- this is the kind of behavior also exhibited by download managers that try to use as many "slots" as possible to speed things up, and many server admins hate that sort of thing. Also, if you're on a connection that's not fast to begin with (e.g., slow ISDN or dial-up), changing this setting will have no discernible effect, and may in fact slow things down.

Bump up persistent connections per server
Firefox keeps persistent connections to a server "alive" to improve performance: Instead of simply sending the results of one request and then closing, they're held open so that multiple requests can pass back and forth. This means a little less network traffic overall, since a connection to a given server has to be set up only once, instead of once for each separate piece of content; it also means successive connections to the same server go through faster.

The integer preference network.http.max-persistent-connections-per-server controls the number of persistent connections allowed per server. By default, this is set to 2, although some servers will honor a higher number of persistent connections (for instance, if there's a lot of content from their site that loads in parallel, like images or the contents of frames). You probably only want to go as high as 8 with this; more than that may cause a server to temporarily blacklist your IP address depending on how it's configured. (If you're going through a proxy defined by Firefox, use network.http.max-persistent-connections-per-proxy instead of this setting.)

Reduce the interval between persistent connections
If you've already used up all the persistent server connections described in the above setting and Firefox needs to make more connections, the integer setting network.http.request.max-start-delay governs how long to wait before attempting to open new connections. This helps if Firefox's persistent-connection limit has been used up by a number of long downloads, and the browser needs to queue a shorter download on top of that.

Most people set this to 0 (in seconds), with the default being 10. Note that this does not override connection limits imposed by remote hosts, so its usefulness is limited by the whim of the server you're connecting to.

Turn on pipelining
The Boolean preference network.http.pipelining enables an experimental acceleration technique called "pipelining," which speeds up the loading of most Web pages. A browser normally waits for some acknowledgment of a given request from a server before attempting to send another one to that server; pipelining sends multiple requests at once without waiting for responses one at a time.

If you turn this on (that is, set its value to true), also be sure to create or edit the integer preference network.http.pipelining.maxrequests, which controls the maximum number of requests that can be pipelined at once. 16 should do it; some people go as high as 128 but there's not much evidence it'll help. (If you use a proxy, set network.http.proxy.pipelining to true as well.)

Note that not every Web server honors pipelining requests correctly, which is why this feature is turned off by default and still considered experimental. Some sites may behave strangely if you submit pipelined requests.
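If you do experiment with the network settings, a moderate user.js sketch based on the suggestions above (not the extremes) might be the following; add network.http.proxy.pipelining if you browse through a proxy:

user_pref("network.http.max-connections", 32);
user_pref("network.http.max-connections-per-server", 8);
user_pref("network.http.max-persistent-connections-per-server", 4);
user_pref("network.http.request.max-start-delay", 0);
user_pref("network.http.pipelining", true);
user_pref("network.http.pipelining.maxrequests", 16);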

Stop memory hogging

The default way the Windows version of Firefox consumes memory can be alarming if you don't know what's really going on. People routinely report a memory "footprint" of 75MB to 100MB or more with only a few windows or tabs open, and they assume a memory leak is to blame. While earlier versions of Firefox did have memory leak bugs, they're not the reason for this kind of memory consumption in Firefox 2.x.

Here's what's happening: Firefox caches recently used objects -- Web pages, images -- in memory so that they can be re-rendered on-screen quickly, which drives up memory usage. The following tweaks can make Firefox stake out memory less aggressively. (Note, however, that lightening the memory load might make your pages load a bit more slowly than you're used to.)

Reduce graphics caching
When the Boolean preference browser.cache.memory.enable is enabled (the default), Firefox keeps copies of all graphical elements from the current browsing session in memory for faster rendering. You can set this to false to free up more memory, but pages in your history will reload less quickly when you revisit them.

Another option: Set the value to true and create a new integer preference called browser.cache.memory.capacity. Then specify, in kilobytes, how much memory to set aside for graphics caching. That way you get some of the speed benefits that graphics caching provides without taking a huge memory hit. If you use -1 as the memory value, Firefox will size the memory cache based on how much physical RAM is present.

Reduce Web page caching
Firefox caches several recently visited Web pages in memory so they don't have to be regenerated when you press Back or Forward. The integer setting browser.sessionhistory.max_total_viewers determines how many individual Web pages to store in the back/forward cache; each page takes about 4MB (or 4,000KB) of RAM.

By default, however, this value is set to -1, which determines how many pages to cache from the amount of available physical memory; the maximum number of pages stored when you use -1 is 8. Set this value to 0 to disable page caching entirely. That will save some memory, but will also cause Back and Forward navigation to slow down a bit.

Note that this caching is not the same as browser.cache.memory.enable: That setting is for rendering elements on pages like graphics and buttons, and the contents of https-encoded pages, while this setting is for caching the text content of Web pages that have already been rendered or "tokenized."

Swap out to disk memory when minimized (Windows only)
A little-known feature in Firefox allows the Windows memory manager to swap out some of Firefox's physical memory space to disk when Firefox is minimized but not closed. This allows other programs to use the physical memory that Firefox was previously monopolizing.

By default, this feature is turned off, for two reasons: 1) PC memory is generally more plentiful than it used to be, so it makes sense to use it if it's available, and 2) swapping Firefox's memory out to disk will slow the program down when it's restored.

That said, if you run Firefox side by side with other memory-hungry applications, it might help keep them from competing with each other. To enable this feature, create a new Boolean preference called config.trim_on_minimize and set its value to true.
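To round out this section, the memory-related tweaks might appear in prefs.js as follows; the 16384 (kilobytes, roughly 16MB) cache capacity is only an illustrative figure, and config.trim_on_minimize applies to Windows only:

user_pref("browser.cache.memory.enable", true);
user_pref("browser.cache.memory.capacity", 16384);
user_pref("browser.sessionhistory.max_total_viewers", 2);
user_pref("config.trim_on_minimize", true);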

Got your own about:config tweaks to share? Add them to the Comments area at the bottom of the page. If you've got the itch to learn more about about:config settings, MozillaZine's about:config entries wiki is a great source of information.

Serdar Yegulalp writes about Windows and related technologies for a number of different publications, including his own Windows Insight blog.



Mar 12, 2007

Paranoid Browsing with Squid

From http://outflux.net/blog/archives/2006/12/07/paranoid-browsing-with-squid/

As Carthik says, the SSH SOCKS option is a great way to quickly tunnel your web traffic. A word of caution for the deeply paranoid: all your DNS traffic is still in the clear. While the web traffic and URLs aren’t sniffable any more, curious people can still get a sense for what kinds of stuff you’re browsing, based on domain names. (And for the really really paranoid: if you’re on open wireless, your DNS lookups could get hijacked, causing you to browse to look-alike sites ready to phish your login credentials.)

Luckily, with SOCKS5 Firefox can control which side of the proxy handles DNS lookups. By default, it does the lookups locally resulting in the scenario above. To change this, set network.proxy.socks_remote_dns = true in about:config. This makes the SOCKS proxy more like a regular proxy, where DNS is handled by the remote end of the tunnel.
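As a minimal sketch (the user name, host and port are placeholders), the tunnel itself is started with something like:

ssh -D 1080 user@shell.example.com

then point Firefox's connection settings at a SOCKS v5 proxy on localhost port 1080 and set network.proxy.socks_remote_dns to true in about:config so the lookups happen at the far end of the tunnel.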


Mar 8, 2007

120 Days Vista Free

Official Way to Install and Use Windows Vista without Activation for Free for 120 Days

By default, Windows Vista can be installed, used and run without any license, product key or activation for a 30-day grace period, for purposes of trial or evaluation. Microsoft initially stressed that users should purchase a license with a valid product key before the trial period expires, or else Windows Vista will lock into Reduced Functionality Mode. However, a “rearm” method has long since been discovered that can extend, or reset, the remaining time before activation by another fresh 30 days, up to 3 times.

Now Microsoft has confirmed that Windows Vista can be used and run for 120 days or 4 months (3 rearms), and that extending the activation grace period is not a violation of the Vista End User License Agreement (EULA). All versions of Vista, including Windows Vista Ultimate, allow the 30-day free period without activation, except the corporate-oriented Vista Enterprise, which supports only a three-day trial.

To extend, reset or restart the initial OOB grace period of Windows Vista to another 30 days, use the following steps:
  1. Click on Vista Start button and key in Cmd in Start Search box.
  2. Press on Ctrl-Shift-Enter to open Command Prompt with administrative credentials (equivalent to “Run as Administrator”).
  3. In the Command Prompt, type the following command and press Enter when done: slmgr -rearm (alternatively, you can use sysprep /generalize).
  4. Reboot the computer.
  5. Rearm again when the remaining activation grace period timer counts down to 0 days.
The rearm option resets the computer’s activation timer and reinitializes some activation parameters.
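To check how much of the current grace period remains before you need to rearm again, the same licensing script can report it (a hedged example; run it from an administrative Command Prompt):

slmgr.vbs -dli

or, to see the exact expiration date of the current grace period:

slmgr.vbs -xpr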

Feb 25, 2007

Cmd Prompt From Here

You can create a text file named anything.reg, and insert this text into it:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\CommandPrompt]
@="Command Prompt:"

[HKEY_CLASSES_ROOT\Directory\shell\CommandPrompt\Command]
@="cmd.exe /k cd %1"

Double click on that file, and the text will be entered into the registry, and you’ll have the same right click command prompt.

Clear IE7 Browsing History

If you like to build batch files to automate cleanup on your computer, you’ll probably want to include at least one of these commands in your batch script. You can automate any one of the functions on the Internet Explorer 7 Delete Browsing History dialog.

And here are the commands that correspond to the different buttons. The most important one from a cleanup perspective is the first, which deletes just the temporary internet files that are cluttering up your computer.

To use these commands, just run them from the command line, the Start menu search box in Vista, or a batch file.

Temporary Internet Files

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 8

Cookies

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 2

History

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 1

Form Data

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 16

Passwords

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 32

Delete All

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 255

Delete All - “Also delete files and settings stored by add-ons”

RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 4351

These commands should work in Internet Explorer 7 on XP or on Windows Vista.

Delete System Files

Warning: Do not delete system files. Bad things will probably ensue.

If you need to delete or overwrite a system file in Windows Vista, you’ll quickly notice that you cannot delete system files, even as administrator. This is because Windows Vista’s system files are owned by the TrustedInstaller service by default, and Windows File Protection will keep them from being overwritten.

Thankfully, there’s a way that you can get around this. You need to take ownership of the files, and then assign yourself rights to delete or modify the file. For this, we’ll use the command line.

Open an administrator command prompt by typing cmd into the start menu search box, and hit the Ctrl+Shift+Enter key combination.

To take ownership of the file, you’ll need to use the takeown command. Here’s an example:

takeown /f C:\Windows\System32\en-US\winload.exe.mui

That will give you ownership of the file, but you still have no rights to delete it. Now you can run the cacls command to give yourself full control rights to the file:

cacls C:\Windows\System32\en-US\winload.exe.mui /G Administrator:F

Note that my username is Administrator, so you will substitute your username there.

At this point, you should be able to delete the file. If you still can’t do so, you may need to reboot into Safe Mode and try it again. For the filename in the example, I was able to overwrite it without safe mode, but your mileage may vary.

Feb 12, 2007

Start with Specific CPU

Windows Vista has an option that lets you start an application and set the CPU affinity, which assigns the application to run on a specific CPU in a dual-core system.

To start an application, you have to pass the affinity flag to the start utility at the command prompt. For instance, if you wanted to start Notepad assigned to CPU 0, you could use the following command:

c:\windows\system32\cmd.exe /C start /affinity 1 notepad.exe

You can see in task manager that the process is only assigned to CPU 0.


To start a process on CPU 0, use the following command switch:

/affinity 1

For CPU 1, use this switch:

/affinity 2

The affinity value is actually a hexadecimal bitmask: bit 0 is CPU 0, bit 1 is CPU 1, and so on. So /affinity 4 would use CPU 2, and /affinity 5 would use CPUs 0 and 2 together. Use a mask that only covers the CPU cores or CPUs actually present in your system.

You can also modify the shortcut for an item to make it run on a specific CPU, by just prepending the full “c:\windows\system32\cmd.exe /C start /affinity 1 ” onto the shortcut target. The only drawback to this approach is that the command prompt window will briefly flash on the screen.



Enable or Disable UAC

UAC stands for User Account Control. It's similar to "sudo" on Linux; Microsoft borrowed the idea from the UNIX world and created a GUI for it.

Here are quick ways to enable or disable UAC using the command line or the GUI.

Disable UAC (command line)
C:\Windows\System32\cmd.exe /k %windir%\System32\reg.exe ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /t REG_DWORD /d 0 /f


Disable UAC (mouse)
  • Open up Control Panel, type in "user account" in the search box.
  • See the link for "Turn User Account Control (UAC) on or off" and click it.
  • Uncheck the box, and reboot your computer. You should be done with obnoxious prompts!
Enable UAC (command line)
C:\Windows\System32\cmd.exe /k %windir%\System32\reg.exe ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /t REG_DWORD /d 1 /f
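Either way, a reboot is required before the change takes effect. To check the current setting without opening the Registry Editor, you can read back the same EnableLUA value the commands above write (1 means UAC is on, 0 means it is off):

reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA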

Enable Ctrl+Alt+Del for Vista Logon Screen

In previous NT-based versions of Windows, you had to press the Ctrl+Alt+Del combination to log in to the system. This was meant to provide a more secure login, since that key sequence can only be intercepted by Windows itself.

This "feature" has been disabled in Vista by default. You can turn it back on if you wish.
  • Open the Advanced User Accounts panel by typing in netplwiz into the start menu search box, and hitting enter.
  • Then click the Advanced tab and look for the Secure logon section.

  • Check the “Require users to press Ctrl+Alt+Delete” box, and the next time you log in you'll see the old familiar prompt.
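If you prefer the command line, the same behaviour can also be controlled through the registry. The DisableCAD value below is my assumption for the setting behind this checkbox (0 should require Ctrl+Alt+Del at logon, 1 should skip it), so double-check it on your system before relying on it:

reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v DisableCAD /t REG_DWORD /d 0 /f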

Enable Run Command

The simplest ways to do so include:
  • Hitting Win+R on the keyboard.
  • Re-enabling the Run dialog on the Start menu: right-click the Start button, select Properties, and then click Customize on the ensuing dialog window. You'll be taken to the Customize Start Menu screen. Check the “Run command” checkbox in the list, and you should be back in business.

0day in Solaris 10 and 11 Telnet

From SANS: Another good reason to stop using telnet

Published: 2007-02-11,
Last Updated: 2007-02-11 23:07:07 UTC
by donald smith (Version: 1)

There is a major zero-day bug announced in Solaris 10 and 11 involving the telnet and login combination.
It has been verified. In my opinion, NOBODY should be running telnet open to the Internet.

The issue:
The telnet daemon passes command-line switches directly to the login process, which honors a switch that allows logging in to any account (including root) without a password. If your telnet daemon is running as root, this allows unauthenticated remote logins.

Telnet should be disabled. Since 1994 the cert.org team has recommended using something other than plain-text authentication due to potential network monitoring attacks. http://www.cert.org/advisories/CA-1994-01.html
“We recognize that the only effective long-term solution to prevent these attacks is by not transmitting reusable clear-text passwords on the network.”

If remote shell access is required, ssh is a better choice than telnet. We have done articles about securing ssh in the past. http://isc.sans.org/diary.html?storyid=1541

The FIX:
To disable telnet in Solaris 10 or 11, this command should work:
svcadm disable telnet
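To confirm the service really is off afterwards, you can check its SMF state; this assumes the standard telnet service name and should report a state of disabled:

svcs telnet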

The Mitigations:
Limit your exposure: if you must run telnet on your Solaris system, it is recommended that you use firewall(s) to limit which IP addresses can connect to your telnet service.

Another mitigation that works is this:
inetadm -m svc:/network/telnet:default exec="/usr/sbin/in.telnetd -a user"
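To verify the mitigation took effect, you can list the instance's properties and check the exec string (again assuming the default instance name):

inetadm -l svc:/network/telnet:default | grep exec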

I am not going to include the site with the exploit. No special tools are required to exploit this vulnerability.

Thanks to Chris and Thomas who notified us of this issue and all the fellow handlers that helped verify, mitigate and review this report.

From SecuriTeam: Solaris Telnet 0day or Embarrassment

Johannes Ullrich from the SANS ISC sent this to me and then I saw it on the DSHIELD list:

If you run Solaris, please check if you got telnet enabled NOW. If you
can, block port 23 at your perimeter. There is a fairly trivial Solaris
telnet 0-day.

telnet -l "-froot" [hostname]

will give you root on many Solaris systems with default installs.
We are still testing. Please use our contact form at
https://isc.sans.org/contact.html
if you have any details about the use of this exploit.

You mean they still use telnet?!

Gadi Evron,
ge@linuxbox.org.



Hidden Boot Screen in Vista

from the How-To Geek

The default Windows Vista boot screen is plain and uninformative, but Microsoft hid a more visually appealing boot screen that can be enabled with very little trouble. I'm not sure why they didn't just ship the better boot screen in the first place.

All you have to do is type msconfig into the start menu search box, and hit enter.

Click the Boot tab, and then check the “No GUI boot” checkbox.

Hit OK and reboot the computer. You should see the new boot screen immediately.
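If you prefer the command line, the same checkbox can reportedly be toggled with bcdedit from an elevated prompt. Treat the quietboot option name below as an assumption on my part (it appears to be the BCD setting behind msconfig's “No GUI boot”), and check bcdedit /? before relying on it:

bcdedit /set {current} quietboot on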

Gmail's Philosophy

From Gmail's Philosophy Today

Google's approach to mail, Gmail, was launched on April 1st, 2004 as an invitation-only system. People initially thought Gmail was Google's April Fools' Day joke, but it turned out to be real.

What set Gmail apart?
  • Don't throw anything away
    Gmail offered 1 GB of storage, 250 times more than Yahoo Mail. Google thought people wouldn't need to delete messages anymore, so Gmail didn't include a Delete button. But users really wanted to delete unnecessary messages, so Google eventually had to add one (January 2006).

  • Search, don't sort
    Such a large amount of storage required a good search engine. Google indexed the full text of messages, so you can search them thoroughly. There's also an advanced search that lets you look for a certain sender or time interval. But many users still want a way to sort messages: for example, it would be nice to sort them by size or by sender.

  • Keep it all in context
    Google thought it would be nice to display all the replies to a message in a thread, like on a message board. Gmail does that by looking at the subject, so if someone changes the subject, the reply is not included in the thread. While many users agree it's a better way to handle an email exchange as a conversation, there are people who think each message should be treated independently.

  • No pop-up ads. No untargeted banners
    Gmail shows text ads related to the current message. In 2004, when Gmail was launched, uninformed people spread the idea that Gmail broke users' privacy by scanning the full text of messages to display ads. As Tim O'Reilly reported, "a number of organizations have asked Google to voluntarily suspend the service. One California legislator has gone so far as to say she plans to introduce a bill to ban it." As people got Gmail invitations, they realized Google's system was better than feared: mail scanning is automated, and Gmail displays unobtrusive and sometimes even useful ads.

  • Labels, not folders
    Instead of storing messages in separate folders, you can attach one or more labels that describe their content. Filters help you do that automatically. But there are many people who want folders: that's why Yahoo Mail and Windows Live (Hot)mail chose to stick with folders.

Gmail's philosophy was to remove as many constraints as possible and to offer a flexible way to organize your mail. But removing one constraint usually imposes a new rule that users must abide by. People will always want to delete their messages, to see the first message received from aunt Lilly, or to move a message into a specific container like they do with their files (even if you can approximate this in Gmail by labeling a message and then archiving it). Messages from Gmail's discussion group confirm this:

"If I could sort by sender, then it would be much easier to find all of the emails from a certain group, individual, mailing list, company. Searching is great, it has tons of usefulness, but it does NOT replace sorting. It can be more cumbersome in many instances, no matter how well you refine it."

"I understand that some of the developers of Gmail feel that conversations are fundamental to the Gmail experience. But by not offering the option to disable it, you really are forcing many of your users to interact with their email in a way that they would prefer not to. Where is the choice? Of course I can set up my account to pop all of the mail to Outlook Express or some variant. But that removes me from the otherwise excellent Gmail experience, which I certainly do not want to do."

Jan 18, 2007

The 7 Laws of Identity

Microsoft has proposed architectural principles ("7 Laws of Identity") to support convergence towards an inter-operable, secure, and privacy-enhancing plurality of identity systems - an "Identity Metasystem". This new concept presupposes that a single monolithic identity system for the Internet is neither practicable nor desirable.

The ability of Internet users to manage identity relationships with diverse organisations is a prerequisite to further development of e-commerce and efficient delivery of government services online. However, a rising tide of information security threats, from “phishing” and “spoofing” attacks on the user to large-scale breaches of centralised repositories of identity information, suggests that new approaches are needed which can empower the individual to take more control of how their personal information is used online. For a number of years there has been growing interest in industry and research communities in the concept of "user-centric" identity management systems. What are the implications for security and privacy of offering individuals greater transparency over how their data is used, and how can this best be achieved?

The 7 Laws of Identity
======================
  1. User Control and Consent - Technical identity systems must only reveal information identifying a user with the user’s consent.
  2. Minimal Disclosure for a Constrained Use - The solution that discloses the least amount of identifying information and best limits its use is the most stable long-term solution.
  3. Justifiable Parties - Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.
  4. Directed Identity - A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.
  5. Pluralism of Operators and Technologies - A universal identity metasystem must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.
  6. Human Integration - The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.
  7. Consistent Experience Across Contexts - The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

Jan 15, 2007

Security in SDLC

The software development life cycle

The software development life cycle, or SDLC, encompasses all of the steps that an organization follows when it develops software tools or applications. Organizations that incorporate security in the SDLC benefit from products and applications that are secure by design. Those that fail to involve information security in the life cycle pay the price in the form of costly and disruptive events.

In an organization that's been around for several years or more, the SDLC is usually well documented: it spells out the steps to follow and in what order, the business functions and/or individuals responsible for carrying them out, and where records are kept.

A typical SDLC model contains the following main functions:

  • Conceptual definition. This is a basic description of the new product or program being developed, so that anyone reading it can understand the proposed project.
  • Functional requirements and specifications. This is a list of requirements and specifications from a business function perspective.
  • Technical requirements and specifications. This is a detailed description of technical requirements and specifications in technical terms.
  • Design. This is where the formal detailed design of the product or program is developed.
  • Coding. The actual development of software.
  • Test. This is the formal testing phase.
  • Implementation. This is where the software or product is installed in production.
Each major function consists of several tasks, perhaps documented in flowchart notation with inputs, outputs, reports, decisions and approvals. Some companies build workflow applications to support all of this.

Getting the right security information to the right people

Many people in the entire development process need all kinds of information, including security information, in a form that is useful to them. Here is the type of information that is required during each phase of the SDLC.
  • Conceptual -- Organization information security principles and strategies
  • Functional requirements and specifications -- Information security requirements
  • Technical requirements and specifications -- Information security requirements
  • Design -- Enterprise security architecture and security product standards
  • Coding -- Development standards, practices, libraries and coding examples
  • Testing -- Test plans that show how to verify each security requirement
  • Implementation -- Procedures for integrating existing authentication, access controls, encryption, backup, etc.
If you are wondering why maintenance is omitted from the life cycle example here, it is because maintenance is just an iteration of the life cycle: when a change is needed, the entire process starts all over again. All of the validations that are present the first time through the life cycle are needed every time thereafter.

Finally, one might object that these changes represent a lot of extra work in a development project. This is not the case: the additions take little extra time, and they are small investments that reap large benefits later on.

Approval: Moving to the next step


Organizations that use a software development life cycle process usually have approval steps at each major function. This takes the form of some kind of an approval meeting with the right stakeholders present: generally you find managers, directors, occasionally a VP – the people who control budgets, resources and business priorities.

Someone who represents information security should be present and have the authority to vote at most, if not all, major steps in the life cycle. If someone representing infosec is not present at a life cycle approval meeting, then there is a risk that a project lacking some key security component will be approved, only to become a problem in the future.

Fix it now or pay the price later

Organizations that fail to involve information security in the life cycle will pay the price in the form of costly and disruptive events. Many bad things can happen to information systems that lack the required security interfaces and characteristics. Some examples include:
  • Orphan user accounts (still-active accounts that belong to employees or contractors who have left the organization) that exist because the information system does not integrate with an organization's identity management or single sign-on solution.
  • Defaced Web sites as a result of systems that were not built to security standards and, therefore, include easily exploited weaknesses.
  • Fraudulent transactions that occur because an application lacked adequate audit trails and/or the processes required to ensure they are examined and issues dealt with.
Problems like these are all costly to solve, in most cases far more costly than the little bit of extra effort required to build the products or applications correctly in the first place.