Antivirus Software To Become Mandatory?

A new report is recommending that vulnerable computers be barred from accessing the internet — but the idea risks creating more problems than it solves.

This article was originally published on 24 June 2010 on newmatilda.com.
Padlocked computer

Earlier this week, the Standing Committee on Communications tabled a report on its yearlong inquiry into cybercrime. The report, titled Hackers, Fraudsters and Botnets: Tackling the Problem of Cyber Crime, makes 34 recommendations aimed at improving computer security in Australia. One of them in particular — a proposed industry code requiring Australians to install and maintain antivirus and firewall software to access the internet — has sparked some debate.

To assess the merits of that recommendation, it is necessary to understand how ISPs are presently (mostly not) regulated in the area of cyber security, and what exactly the report proposes to change.

The Internet Industry Association (IIA), a group representing ISPs, is largely responsible for writing the codes that regulate them. In relation to cyber security, the IIA recently released a voluntary code of practice titled icode. Among other things, this code lists a number of steps that ISPs may take when they become aware of malware-infected machines on their networks (such as notifying the user or disconnecting the user from the internet), but it leaves it up to the relevant ISP to decide which course of action is appropriate in the circumstances.

The current code is thus doubly voluntary. First, the code itself is voluntary, so ISPs can choose not to comply with it at all; second, ISPs that choose to comply are not required to take any particular steps in relation to malware-infected machines on their networks. And in no way does the code require users to install and maintain antivirus and firewall software.

The first thing that the new report proposes to change is to have an industry code that is registered. The Australian Communications and Media Authority (ACMA) presently has a power under the Telecommunications Act 1997 (Cth) to register industry codes that deal with certain things. Where such an industry code is registered, ACMA can direct an ISP to comply with the code. Failure to comply with such a direction exposes the ISP to a civil penalty of up to $250,000 per breach. A registered industry code is thus effectively mandatory.

Next, if the recommendations were adopted, ISPs would be required to take certain mandatory steps when malware-infected machines are found on their networks. Specifically, they would be required to notify the relevant users and implement graduated access restrictions (including disconnection) until the relevant machines are cleaned. Importantly, the report does not propose to require immediate disconnection of users whose machines are infected with malware, but rather a graduated response, where disconnection would presumably be the last step. This is important in particular because removal of malware often depends on the installation of up-to-date antivirus software, which is usually obtained online.

Most notably, though, the proposed code would require ISPs to include a contractual term in their acceptable use policies requiring users to install and maintain antivirus and firewall software before accessing the internet. It is this requirement that has raised the most eyebrows.

The most readily apparent problem with this recommendation is that enforcement would be impractical. The proposed code would require a new term in the contract between the ISP and the user, which could only be legally enforced by the ISP (and not, for example, by ACMA). It is not clear whether ISPs would be motivated to enforce these new contractual obligations. Most ISPs’ acceptable use policies currently prohibit the use of their services to infringe copyright, yet as the content industry will tell you, ISPs have not exactly been zealous in policing that part of their policies.

But even if the code required ISPs to actually enforce their contractual rights, for example by disconnecting users who did not comply, it would not be practical for ISPs to verify that their users have up-to-date antivirus and firewall software installed. Arguing that ISPs could manage this task, prominent cyber-security consultant Alastair MacGibbon has made the following point:

There is software available which could be on end-user machines that would allow my ISP, as I log in, to check that I have my firewall turned on, that I have an antivirus that they approve or recommend installed on my computer, and that my operating system and browser are patched — and if those things aren’t met then [my ISP would not] give me [access].

However, such software only works with certain antivirus and firewall products and only works on certain operating systems. And it would put ISPs in the position where they would have to approve particular antivirus and firewall software before users could use it, significantly limiting consumer choice. Approaching the issue of computer security this way appears to create more problems than it solves. Should ISPs be allowed — let alone forced — to dictate what antivirus and firewall products their users may use and what operating systems they may run? And should users be forced to install software from their ISPs that reports back what software they are running to their ISPs?

The other problem with the recommendation is that it is not clear what exactly users would be required to do to comply with these new contractual obligations. Would antivirus and firewall software need to be installed on all devices connected to the user’s network? Antivirus and firewall software for iPhones and iPads, for example, is not presently available (and arguably not even possible). And there are many other devices for which such software is not as readily available as it is for Windows, including computers running Mac OS X and Linux (arguably because those devices do not need it to the same extent).

The question which then arises is whether any of this is really necessary. Most broadband connections are already provided using a modem-router that doubles as a firewall, and Windows itself (like most other operating systems) already includes a firewall that is on by default. While comprehensive antivirus software is not included with Windows itself (or most other operating systems), free solutions, including Microsoft Security Essentials, are readily available. It is not clear how including a contractual term that most users will never read would be any more effective at encouraging use of appropriate security software than educating users about the need for such software at the time they are provided with internet access (and perhaps via periodic reminders).

Notwithstanding the somewhat controversial recommendations discussed above, it is worth mentioning that the report does cover a lot of ground and makes many other good recommendations. They deal with three areas: aggregation and distribution of data about cybercrime, updating criminal and civil enforcement laws, and educating the public about computer security.

The report recommends setting up coordinated systems to gather and share information about cybercrime, with the aim of using that information to improve responses to online threats. Among other things, this would include developing a reporting system aimed at consumers and small and medium-sized businesses, consisting of a centralised portal for reporting cybercrime (including malware, spam, phishing, scams, identity theft, and fraud) and a 24/7 reporting helpline.

Criminal laws dealing with cybercrime would be reviewed and updated where necessary, and the Australian Consumer Law would be amended in two notable ways. First, consumers would gain a specific right to sue for unauthorised installation of software that monitors, collects, and discloses information about consumers’ activities (ie, spyware). Second, consumers would gain a right to sue a manufacturer for loss caused by a product that was released onto the Australian market with known security vulnerabilities.

Finally, and perhaps most importantly, the report sets out steps to improve community awareness of computer security issues. It does this in two ways. First, the report proposes a ‘public health style campaign’ to deliver messages about computer security issues as well as appropriate behaviours and technical precautions that users should take. Second, the report recommends specific changes to the law requiring, for example, the provision of security information about certain products (such as computers and routers) to users at the point of sale, and requiring also that certain products be designed to prompt and guide users to choose more secure settings (such as setting strong encryption on a wireless access point to secure the network).

While the report contains certain controversial recommendations, that’s normal for reports like this one. Meanwhile the many reasonable recommendations the committee makes — in particular the points about educating users — are a valuable contribution and deserve consideration.


Tags: antivirus, security

Google Is Watching

Google’s collection of information about Wi-Fi networks may not breach any laws but concerns loom over the company’s attitude to private data.

This article was originally published on 18 May 2010 on newmatilda.com.
Google is watching

Electronic Frontiers Australia and the Australian Privacy Foundation raised concerns last week about Google’s use of its Street View cars to collect identifying information about Wi-Fi networks for use in its geolocation service. While that identifying information is relatively harmless, Google has now admitted that it has accidentally collected data sent by users on unencrypted Wi-Fi networks too.

The first half of this story concerns the identifying information about Wi-Fi networks that Google was trying to collect. To explain the practice, we need to cover some basic Wi-Fi concepts.

Each Wi-Fi network is identified by a human-readable name called an SSID (like ‘My Wireless Network’) and a unique hexadecimal number which is usually assigned by the manufacturer of the Wi-Fi access point and called a BSSID or MAC address (like 00-17-9A-76-CB-A6).

Normally, a Wi-Fi access point will publicly broadcast its SSID and BSSID so that nearby computers can display the Wi-Fi network to users in a list of available networks, though most Wi-Fi access points allow you to disable this broadcast if you want.

In addition to that, the BSSID is always sent together with any data transmitted over the Wi-Fi network. Since multiple Wi-Fi networks can operate in the same space, devices connected to a Wi-Fi network need to be able to distinguish between data meant for their network and data meant for other nearby Wi-Fi networks. The devices do this by tagging transmitted data with the BSSID of the Wi-Fi network to which they are connected.
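The tagging and filtering described above can be sketched in a few lines of JavaScript (a simplified illustration of the concept only; the frame objects here are hypothetical, and real filtering happens in the Wi-Fi hardware and driver):

```javascript
// A hypothetical stream of Wi-Fi frames overheard by a device.
// Each frame is tagged with the BSSID of the network it belongs to.
const frames = [
  { bssid: '00-17-9A-76-CB-A6', payload: 'data for my network' },
  { bssid: 'A4-2B-8C-11-22-33', payload: 'data for a neighbouring network' },
  { bssid: '00-17-9A-76-CB-A6', payload: 'more data for my network' },
];

// A device connected to a given network keeps only the frames whose
// BSSID tag matches that network's BSSID, and ignores the rest.
function framesForNetwork(frames, bssid) {
  return frames.filter((frame) => frame.bssid === bssid);
}

console.log(framesForNetwork(frames, '00-17-9A-76-CB-A6').length); // 2
```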

Finally, it is important to note that the SSID and BSSID are used in the way described above irrespective of whether the Wi-Fi network is secured with a password (WEP, WPA, or WPA2).

It was Google’s collection of the SSIDs and BSSIDs of Wi-Fi networks around Australia that initially gave rise to privacy concerns last week. What Google did was mount Wi-Fi antennas to the roofs of the cars that drive around Australia taking photographs of the roadside for Google Maps Street View. As these cars mapped each city, they collected packets of data sent over nearby Wi-Fi networks. The idea was to take the SSIDs and BSSIDs from the collected packets of data, and to store them in a database together with the information about the location where the SSIDs and BSSIDs were seen.

Google could then use the collected information to provide a geolocation service to its users. The next time a user wanted to know his or her approximate location, he or she could send the SSIDs and BSSIDs of Wi-Fi networks that were nearby to Google. Google could then look up the SSIDs and BSSIDs in its database, retrieve the location where its Street View cars last saw those SSIDs and BSSIDs, and send that approximate location to the user.
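The lookup side of such a service might be sketched as follows (my own simplified illustration, not Google’s implementation; a real service would combine multiple sightings, weighted by signal strength, into a single position estimate):

```javascript
// Hypothetical database mapping BSSIDs to the location where a
// Street View car last observed them.
const observations = {
  '00-17-9A-76-CB-A6': { lat: -33.8688, lon: 151.2093 },
  'A4-2B-8C-11-22-33': { lat: -37.8136, lon: 144.9631 },
};

// Given the BSSIDs a user's device can currently see, return the
// known locations of those networks.
function locate(visibleBssids) {
  return visibleBssids
    .map((bssid) => observations[bssid])
    .filter((loc) => loc !== undefined);
}

console.log(locate(['00-17-9A-76-CB-A6']).length); // 1
```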

In other words, Google’s geolocation service has the same function as GPS: it gives the user his or her location. However, whereas GPS uses the user’s distance from GPS satellites of known location to estimate the user’s location, Google’s geolocation service uses the distance from Wi-Fi networks of known location.

And there is nothing unique about Google’s geolocation service. There are many other geolocation providers that use Wi-Fi networks this way, such as Skyhook Wireless and Geomena.

Whether the practice poses privacy problems is a bit more complicated. In Australia, the principal privacy legislation is the Privacy Act 1988 (Cth), which regulates the collection, use, and disclosure of ‘personal information’. Personal information is defined as information about an individual whose identity is apparent or can be reasonably ascertained from that information.

Ordinarily, information about the location of a Wi-Fi network with a particular SSID or BSSID would not fall within this definition of personal information because it cannot readily be linked to an individual, although the position may be different with respect to Wi-Fi networks that use a surname or phone number as the SSID. It is because this information does not ordinarily identify an individual that its collection probably does not breach privacy laws, and does not pose a privacy problem for most people.

And it is for that reason that the common concern that you could be located using the information that Google collected about your Wi-Fi network is unfounded. Google does not store your details; it stores the SSID and BSSID of your Wi-Fi network. To get the location of your Wi-Fi network back from Google’s geolocation service, a person would have to supply, at the very least, your Wi-Fi network’s SSID and BSSID. It is conceivable that such a person could guess the human-readable name or SSID that you have assigned to your Wi-Fi network, but he or she would not be able to guess the corresponding unique hexadecimal number or BSSID. The only way that the person could get that information would be to come within range of your Wi-Fi network, and at that point, the person would already know your approximate location.

Another concern — one with more merit — is that websites that you visit might know what Wi-Fi network you are connected to, or what Wi-Fi networks you are near, and then query Google’s geolocation service to find out your approximate location. The important thing here is that your browser does not send information about what Wi-Fi network you are connected to, or what Wi-Fi networks you are near, to the websites that you visit. Sites that you visit simply do not have access to it. The qualification here is that some browsers now have the ability to send information about your location to geolocation services. However, such functionality works on an opt-in basis.

So that is the first half of the story. Things took a turn on Friday, however, when Google admitted that its Street View cars had collected not only SSIDs and BSSIDs as intended, but also some of the data that users sent over nearby unencrypted Wi-Fi networks. As its cars received packets of Wi-Fi data, rather than stripping the SSIDs and BSSIDs out of the packet and discarding the rest, the entire packet was saved and later stored on Google’s servers.

That means that if you were using an unencrypted Wi-Fi network as a Google Street View car drove past your house, a copy of whatever you were doing could have been collected and stored on Google’s servers together with your approximate location. Whether the data can identify you personally would depend on what you were doing at the time it was collected. If Google happened to come by your house as you were sending an email, then it may have collected personally identifiable information about you (the email together with the sender and recipient).

Collection of such data could very well breach the Privacy Act 1988 (Cth) or the Telecommunications (Interception and Access) Act 1979 (Cth), which prohibits the interception of communications, including email, passing over certain networks, including Wi-Fi networks. And quite irrespective of whether any law is breached, the practice is a cause for concern.

Google has explained that the collection of this additional data was a programming error. It maintains that it intended to collect and store only the SSIDs and BSSIDs of the Wi-Fi networks that its cars passed. And I have no doubt that that is true. The additional data is of minimal use to Google, and deliberately collecting it would have been far more irresponsible than I believe Google to be.

However, that this additional data was collected in error does not make what happened here any more acceptable. This is the second time this year that Google has taken a cavalier attitude towards privacy.

In February, Google released Google Buzz, a Gmail-based social-networking tool. It quickly came to light that Buzz publicly disclosed the email addresses of people who Buzz users emailed most frequently, among other information, without seeking users’ specific consent first. Many users were caught off-guard when their data was unintentionally disclosed to other parties, like abusive ex-husbands.

Google has since corrected its problems with Buzz, but you cannot help but get a feeling of déjà vu as you read Google’s explanation of how it snared unencrypted Wi-Fi data. Google has now vowed to delete the collected data, and to submit itself to a third-party audit to verify that deletion — which was the right thing to do. And it has gone as far as to stop using Street View cars to collect Wi-Fi networking information altogether.

But in light of Google’s recent track record in safeguarding privacy, it would be wise for people to begin questioning what data they disclose to Google. Where people disclose data — whether by entering a search term in Google Search, sending email via Gmail, or broadcasting something as an SSID to the public — it is important that they understand how that data could be used, and question how it is in fact used.


Tags: geolocation, Google, Google Street View, Wi-Fi, Wi-Fi security

How to Get Rid of Temporary Posts Used for Theme Detection Permanently

Use a lightweight plugin to prevent Windows Live Writer from littering your WordPress blog with Temporary Posts Used For Theme Detection.

Windows Live Writer icon

Windows Live Writer is a neat tool for blogging. But its most useful feature, WYSIWYG editing, relies on an inelegant mechanism to detect the theme used by your blog.

To detect the theme, Windows Live Writer will publish a skeleton post to your blog, read it and save its theme, and then delete it. Sometimes the post isn’t deleted. Other times, it’s indexed by Google, FeedBurner, or other similar services before it’s deleted.

The result is an Internet littered with Temporary Posts Used For Theme Detection.

Obsessive compulsives like me don’t want these posts associated with their blogs. Thankfully, it turns out that you can write a plugin for WordPress to prevent these posts from ever appearing on your website.

Step-by-Step

Create a new text file called live-writer-helper.php. Copy and paste the below code into that text file, and save it.

<?php
/*
Plugin Name: Live Writer Helper
Plugin URI: https://chris.dziemborowicz.com/blog/2009/11/17/how-to-get-rid-of-temporary-posts-used-for-theme-detection-permanently/
Version: 1.1
Author: Chris Dziemborowicz
Author URI: https://chris.dziemborowicz.com/
Description: Prevents the Temporary Post Used For Theme Detection from ever appearing on your blog.
*/

// Hide temporary theme-detection posts from everyone except the admin
// interface and Windows Live Writer itself.
function lwh_posts_where($where)
{
    // Guard against requests that send no User-Agent header at all.
    $user_agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    if (!is_admin() && strpos($user_agent, 'Windows Live Writer') === false)
    {
        if ($where != '')
            $where .= ' AND ';
        // % is the SQL LIKE wildcard, so this matches both the 'Theme'
        // and the older 'Style' variants of the temporary post title.
        $where .= 'post_title NOT LIKE \'Temporary Post Used For % Detection (%)\'';
    }
    return $where;
}
add_filter('posts_where', 'lwh_posts_where');

Create a new directory called live-writer-helper in your wp-content/plugins directory, and upload live-writer-helper.php to the new directory. Finally, activate the plugin by logging into WordPress as an administrator, selecting Plugins from the menu, and selecting Activate for the Live Writer Helper plugin.

The plugin works by hiding any post whose title matches ‘Temporary Post Used For * Detection (*)’ from all pages on your blog, as well as from your RSS feed. Users and bots accessing your site, or your RSS feed, won’t be able to see the temporary post.

You can still view the post when logged into the management interface (ie, when logged into wp-admin), so that you can delete the post if it hasn’t been deleted automatically. And, of course, Windows Live Writer can see it too, so that its theme detection engine continues to work.

Hopefully, this will end the flood of Temporary Posts Used For Theme Detection, at least on WordPress blogs.

Note: The present version of Windows Live Writer creates a post titled ‘Temporary Post Used For Theme Detection (*)’. Old versions created posts titled ‘Temporary Post Used For Style Detection (*)’.

If, in a future release, the title of the temporary post changes to something other than ‘Temporary Post Used For * Detection (*)’, you will need to update the code accordingly. (The % is a wildcard in the SQL WHERE clause.)
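For readers unfamiliar with SQL patterns: the % wildcard matches any run of characters (including none), so the plugin’s pattern behaves roughly like the following regular expression (an illustrative JavaScript translation, not part of the plugin):

```javascript
// Rough regex equivalent of the SQL LIKE pattern
// 'Temporary Post Used For % Detection (%)'.
// Each % becomes .* (any sequence of characters, including none).
const pattern = /^Temporary Post Used For .* Detection \(.*\)$/;

console.log(pattern.test('Temporary Post Used For Theme Detection (abc123)')); // true
console.log(pattern.test('Temporary Post Used For Style Detection (xyz)'));    // true
console.log(pattern.test('My Ordinary Post'));                                 // false
```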


Tags: PHP, Windows Live Writer, WordPress, WordPress plugins

Get to Any Section on AustLII in One Step

If you’re using a browser that supports search keywords, you can add a keyword for your favourite Australian act.

The Australasian Legal Information Institute (AustLII) site is a great resource for Australian legislation. While far from perfect, it’s considerably more convenient than the government-run alternatives, at least when you just want to check a section quickly.

However, if you want to check a section, say section 52 of the Trade Practices Act 1974 (Cth), you have to go to AustLII, select Commonwealth from the menu on the left, find and select Commonwealth Consolidated Acts, select T, scroll through the list to find the Act, and, finally, scroll through the list of sections to locate the right section.

There is a better way:

Trade Practices Act 1974 (Cth) keyword in the address bar

If you’re using a browser that supports search keywords, like Firefox, Chrome, or Opera (or Internet Explorer with the right tool), you can add a keyword for your favourite act. For example, you can add a tpa keyword, so that when you type tpa 52 in the address bar, you’re taken directly to section 52 of the Trade Practices Act 1974 (Cth).

Add a Keyword for an Act

To set up a keyword for an act in Firefox, first find the act on AustLII and go to any section. Add that section to your bookmarks, and open the new bookmark’s properties (right-click on the bookmark, and select Properties).

The location for the bookmark will be something like …/tpa1974149/s52.html. You’ll need to change this, replacing the section number with %s, so that it looks like …/tpa1974149/s%s.html. The browser will replace the %s with whatever you type after the keyword in the address bar.
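The substitution is plain string replacement, roughly equivalent to the following (an illustrative sketch, not Firefox’s actual implementation):

```javascript
// The bookmark's location, with %s as a placeholder for the query.
const template = 'http://www.austlii.edu.au/au/legis/cth/consol_act/tpa1974149/s%s.html';

// When you type 'tpa 52' in the address bar, the browser substitutes
// '52' for %s in the bookmarked location.
function expandKeyword(template, query) {
  return template.replace('%s', query);
}

console.log(expandKeyword(template, '52'));
// http://www.austlii.edu.au/au/legis/cth/consol_act/tpa1974149/s52.html
```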

Finally, you’ll need to choose a keyword. This can be whatever you like. The finished bookmark should look something like this:

Trade Practices Act 1974 (Cth) keyword properties

Now, when you type tpa 52 in the address bar you’ll be taken directly to the correct section.

Things to Remember

Remember that the way this works is that the browser replaces the %s in the location for the bookmark with whatever you type after the search keyword. This has some consequences.

For example, even though section 51A of the Trade Practices Act 1974 (Cth) has a capital A, the address for that section is …/tpa1974149/s51a.html. A capital A won’t work, so you have to type tpa 51a.

Another example is the Income Tax Assessment Act 1997 (Cth). All of the section numbers in this act include an en-dash, like section 6–5. However, AustLII replaces the en-dash with a period, so that the address for section 6–5 is …/itaa1997240/s6.5.html. To use a keyword, you have to type itaa 6.5.
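The two quirks above (lowercasing letters, and replacing en-dashes with periods) could be handled by a small helper function, sketched here in JavaScript (my own illustration; the function name is made up, and this is not something AustLII provides):

```javascript
// Normalise a section reference to the form AustLII uses in URLs:
// letters are lowercased, and en-dashes become periods.
function normaliseSection(section) {
  return section.toLowerCase().replace(/–/g, '.');
}

console.log(normaliseSection('51A')); // 51a
console.log(normaliseSection('6–5')); // 6.5
```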

Advanced Keywords

Tax lawyers will be familiar with the two most fundamental tax acts: the Income Tax Assessment Act 1936 (Cth) and the Income Tax Assessment Act 1997 (Cth). Sometimes you need one, and sometimes you need the other. But it’s a pain to type itaa1997 6.5.

On AustLII, the address for every section in the Income Tax Assessment Act 1997 (Cth) has a period in it, and no address for any section in the Income Tax Assessment Act 1936 (Cth) does. So, we can use JavaScript to check whether the section typed after the keyword contains a period, and go to the right act accordingly.

To do that, replace the location in the relevant bookmark with the code below:

javascript:if("%s".indexOf(".")!=-1){location="http://www.austlii.edu.au/au/legis/cth/consol_act/itaa1997240/s%s.html";}else{location="http://www.austlii.edu.au/au/legis/cth/consol_act/itaa1936240/s%s.html";}

Make sure that all of the text is on one line and that there are no spaces.

Now, when you type itaa 6.5 you’ll be taken to section 6–5 of the Income Tax Assessment Act 1997 (Cth), but if you type itaa 65 you’ll be taken to section 65 of the Income Tax Assessment Act 1936 (Cth).


Tags: AustLII, bookmarklets, legislation

ACMA Blacklists Iran Protest Video & Boing Boing

ACMA has blacklisted a video showing violence during the Iranian election protests, as well as a Boing Boing post commenting on it.

Censorship causes blindness. Can you see who is blinding you?

On 20 June 2009, a young woman, Neda Agha-Soltan, was shot and killed during the Iranian election protests. Her death was captured on video, and spread virally on the Internet, becoming a rallying cry for the Iranian protests.

Given the notorious attempts by the Iranian government to censor the protests, both online and in the media, I thought it would be fitting to test Senator Stephen Conroy’s assertions that the Government’s proposed mandatory Internet filter was unlike the censorship that occurs in Iran and under other undemocratic regimes.

I submitted the following to ACMA:

I am an Australian resident. I believe the content at the following links is prohibited content or potential prohibited content hosted outside Australia within the meaning of the Broadcasting Services Act 1992 (Cth).

[URL 1: Boing Boing post with embedded YouTube video showing the death of Neda Agha-Soltan and associated commentary.]

[URL 2: YouTube video showing the death of Neda Agha-Soltan.]

[URL 3: YouTube video showing another angle of the death of Neda Agha-Soltan.]

Each contains graphic video, apparently real, of a young girl shot in the chest and bleeding to death over the course of a couple of minutes.

The first link has no restrictions for viewing the video (but contains a textual warning). The second two links require registration and a declaration of date of birth (and also contain textual warnings).

The videos document the recent violence in Iran.

I have removed the URLs for legal reasons. If you haven’t already seen these videos, they’re easy enough to find (but be warned: they are graphic).

Today, 64 days later, I received a notice from ACMA confirming that the content was prohibited content.

As part of the ACMA’s investigation of the complaint, it applied to the Classification Board for classification of the content concerned. As a result of the Classification Board’s decision, and as the content is not subject to a restricted access system, it is prohibited content under clause 20(1)(b) of Schedule 7 to the Broadcasting Services Act 1992 (the Act).

The videos are certainly graphic, and I can see why there would be demand for a service that allowed people to avoid content such as this, if that is their individual choice.

However, under both the current and the proposed systems of Internet censorship in Australia, the Classification Board’s decision is binding, to varying degrees, on individuals. For instance, now, Australian-hosted sites cannot link to these videos.

Not the Classification Board or ACMA’s Fault

The Guidelines for the Classification of Films and Computer Games provide that the Classification Board classify violent content with an impact higher than ‘strong’ as R 18+, and that it refuse classification of content that contains gratuitous, exploitative, or offensive depictions of cruelty or real violence that are very detailed or that have a high impact.

The relevant video certainly does have a high impact, and I don’t see a problem with the Classification Board’s decision. It is reasonable.

Similarly, ACMA has an obligation to blacklist (ie, add to the list of websites containing prohibited content, which is distributed to makers of IIA Family Friendly Filters) any site hosting prohibited content overseas. ACMA has no discretion not to blacklist content that meets the statutory definition of prohibited content.

You can, however, blame the people responsible for the law: the members of parliament responsible for passing this law originally, and the members of parliament today responsible for not repealing it.

Not Refused Classification

Although the position was ambiguous initially (and is arguably still uncertain), Senator Stephen Conroy has now stated that the Government wants to constrain mandatory Internet filtering to content that is refused classification (though refused classification content is much broader than his statements suggest).

The notice that I received from ACMA indicates that the content was classified R 18+. It made reference to clause 20(1)(b) of Schedule 7 to the Broadcasting Services Act 1992 (Cth), which relates to R 18+ content that is not subject to a restricted access system.

Although it’s implied, it’s not absolutely clear that the classification for each of the three submitted URLs was the same.

Because this content was classified R 18+ and not refused classification, this content would not be subject to mandatory filtering under a regime that mandated filtering only of content that has been refused classification.

Banned?

The proposed mandatory Internet filtering will only apply to content hosted outside of Australia. Presently, prohibited content hosted outside of Australia is added to a blacklist that you can opt into. Under the proposed system, the subset of prohibited content that is refused classification content would be blocked mandatorily.

However, none of this applies to sites hosted in Australia. ACMA can still issue a take-down notice, or a link-deletion notice, to any site hosting or linking to R 18+ content that is not subject to a restricted access system (or other prohibited content). And you can be fined $11,000 per day if you don’t comply with the notice by 6:00 pm the next business day.

There are also state laws that are relevant. For example, section 75D of the Classification (Publications, Films and Computer Games) Act 1995 (SA) makes it an offence to make available or supply R 18+ content using an online service, unless the content is subject to a restricted access system. So, it appears that it’s illegal for South Australians to link to this video (unless they comply with the very onerous restricted access system requirements). The law in your state or territory may vary.

What’s the Point?

The point wasn’t to criticise the judgment of the Classification Board or ACMA. They’re merely fulfilling their obligations under the law. The point was to demonstrate how Australian classification law can affect your ability to view significant material simply because it is disturbing.

It also illustrates the hopelessness of trying to suppress content on the Internet. It took 64 days for ACMA to respond to the complaint, and it’ll take even longer before the content is actually added to the IIA Family Friendly Filters.

Of course, it’s trivial to bypass IIA Family Friendly Filters, and it’ll be just as trivial to bypass any mandatory filter. And there are many sources for this particular content, other than the three URLs that ACMA has now blacklisted.

The final and most important point is that all of this is merely anecdotal. The treatment of this particular content is irrelevant. The question is whether you want to decide what content is significant, and what content is too disturbing, for yourself. Or would you like the Classification Board’s decision to be binding on you?

This post is not intended as legal advice. I make no representations whatsoever as to its quality, and will not be liable for any loss, injury, or damage howsoever resulting from it. Seek independent legal advice.
Censorship chart by Andréia licensed under Creative Commons Attribution 2.0 License.


Tags: ACMA, ACMA blacklist, censorship, clean feed, Iran election protests 2009, Neda Agha-Soltan