It seems that if you are promoting a product or service these days, it’s mandatory to have an associated “Green Story” to back up your proposition. Earning cold hard cash for the benefit of both you and your customer is, in some circumstances, frowned upon if there isn’t an ethical, eco-friendly angle to your pitch. While I support green initiatives and do what I can to help improve the sustainability of the planet, hasn’t it all gone a bit eco-mad?
Those fabled three letters, E-C-O, are being used and abused by all and sundry to get that green tickbox filled. Whether a product is environmentally friendly or not, the ECO label gets thrown around like confetti at a wedding. We have Eco-Homes, Eco-Heaters, Eco-Computers, Eco-Laptops, Eco-Cars, Eco-Trucks… you name it, we have it. In a shameless attempt to look more trendy, I’d like to throw my hat in the ring and talk briefly about how appropriate labelling of documents and emails can help save the planet. Eco-Labelling for short.
A code of connection (CoCo) is a mutually agreed set of rules used by two parties to allow the exchange of information between their systems. The UK government has pursued several initiatives in recent years to connect all government organisations to the secure networks of the central government intranet.
GCSx stands for Government Connect Secure Extranet. This is the network which will specifically connect Local Authorities (LAs) to the central government intranet (GSI – Government Secure Intranet). GCSx relates only to LAs in England and Wales; Scottish LAs will connect through GSX (Government Secure Extranet). Local Authorities must achieve CoCo compliance in order to be granted access to the Government Secure networks. Confused yet? Being driven CoCo.Nuts?
Here’s a diagram to help see how it all fits together:
There are just under 100 controls and measures that a Local Authority needs to put in place in order to be CoCo compliant.
The UK has been awash with scandal upon scandal in recent months. Individuals and organisations we are supposed to trust have abused their positions and the circumstances available to them. Is this to be the century of corruption? The politicians led the way with the expenses scandal, immediately followed by questionable banking practices which brought the world to the brink of bankruptcy. Now, in the latest instalment of the “people doing what they really shouldn’t” saga, we have once-reputable press organisations hacking into the phones of, well, pretty much everyone.
The world needs a double dose of the medicine that is corporate responsibility and employee accountability. Whether or not the chiefs at the head of these corporate tribes were aware of the activities of their employees, ultimately they have a duty of care to take reasonable measures to prevent this kind of unacceptable behaviour occurring. Failure to do so is a slippery slope which rapidly descends from the occasional cheeky rogue to an inherent culture of widespread wrongdoing. Individuals should not be given a shield of plausible deniability or a proclamation of ignorance; each and every individual should be held accountable for their actions. Chiefs have a responsibility to foster and enforce an ethical culture by providing the right training and the right tools for employees to adopt that ethical behaviour.
Just this week we have learned that News International were in fact in possession of emails which were withheld from the police in an attempt to control possible damage from implication in law-breaking. Although possible, it’s difficult to release information in an email without actually thinking about its content before clicking send; much more difficult than giving the go-ahead to do something on the spur of the moment over the phone. Information created or received by an organisation should be treated with the respect it deserves, but with the casual use of email in day-to-day life, it’s easy for the lines to blur. People generally use their work email accounts for informal internal communications, and at times even external ones. When wrongdoing is suspected, the legal defence of “that email was sent in this context” is used all too often.
As an organisation, one line of defence in this legal minefield is… yes… you guessed it… email labelling. Forcing users (whether employees, directors or other execs) to select an appropriate label before sending an email not only builds awareness of company policies, but also reinforces a culture of employee accountability. Investment in an email labelling tool could in the long run save your organisation millions, or may even save it from the recently bloodied axe which took out the News of the World in one fell swoop. Furthermore, there are no longer any excuses on cost: you can do this for free. Although you don’t get all the benefits of the paid version of Boldon James’ Email Classifier, the FreeMark version of Classifier allows you to do exactly that: label emails. IT’S FREE, the clue is in the name – FreeMark. If you want to learn more about the FreeMark initiative, please visit www.freemarkinitiative.com
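To make the idea concrete, here’s a minimal sketch in Python of forcing a label choice before an email can be released. The label names and the `X-Classification` header are made up for illustration; this is not how FreeMark or any particular product actually works.

```python
from email.message import EmailMessage

# Hypothetical protective-marking labels; real schemes vary by organisation.
VALID_LABELS = {"PUBLIC", "INTERNAL", "CONFIDENTIAL"}

def label_email(msg: EmailMessage, label: str) -> EmailMessage:
    """Refuse to release a message until a recognised label has been chosen."""
    if label not in VALID_LABELS:
        raise ValueError(f"unrecognised label {label!r}; choose one of {sorted(VALID_LABELS)}")
    # Stamp the classification into the subject so it is visible to every
    # recipient, and into a header so downstream gateways can filter on it.
    subject = msg["Subject"] or ""
    del msg["Subject"]
    msg["Subject"] = f"[{label}] {subject}".strip()
    msg["X-Classification"] = label  # header name is illustrative
    return msg
```

The point isn’t the five lines of code, it’s the forced pause: the sender cannot click send without consciously deciding how sensitive the content is.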
This article draws on elements of gravity theory to help visualise information security concepts and to describe how to practically implement security policy objectives. It describes a metaphorical model where gravitational forces are analogous to the level of security controls we apply to an organisation’s information. Be warned, this will quite possibly be the nerdiest article I have written, but it will be simple enough… no degree in particle physics required to grasp it.
What is Gravity?
Gravity is a force which attracts and pulls physical objects towards each other. All objects are known to be affected by gravity, from the smallest atom to the largest star in the night sky. A general rule for gravity is that the greater the mass of an object, the more gravitational force it will exert on the other objects around it. The sun, for instance, pulls the earth towards it in the same way that the earth keeps the moon in orbit around it.
At an atomic level, the closer to the centre of an object we get, the greater the gravitational force is. As density increases, the movement of those central atoms is more restricted, whereas the outer atoms are often able to move more freely.
In the same way as gravity applies force to those atoms, drawing them towards the centre, we can secure information by applying varying levels of enforcement based on sensitivity. If we imagine the sum of our organisation’s information as a spherical object made up of thousands of information atoms, we can start to visualise the relationship. Our most sensitive information is at the core of our infosphere (information sphere) and we must apply more force to protect it. As we move further towards the surface of our infosphere, the controls we apply become less restrictive and we let those less sensitive information atoms move more freely.
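The depth-to-controls mapping can be sketched in code. This is a toy model only: the tier boundaries and control names below are hypothetical, not a real policy.

```python
# A toy model of the "infosphere": the deeper (more sensitive) an
# information atom sits, the more controls gravity pulls onto it.
# Tier boundaries and control names are illustrative.
TIERS = [
    # (max_depth, controls) - depth 0.0 is the surface, 1.0 is the core
    (0.33, ["standard access control"]),
    (0.66, ["standard access control", "audit logging"]),
    (1.00, ["standard access control", "audit logging",
            "encryption at rest", "release authorisation"]),
]

def controls_for(depth: float) -> list[str]:
    """Return the controls applied at a given depth in the infosphere."""
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must be between 0.0 (surface) and 1.0 (core)")
    for max_depth, controls in TIERS:
        if depth <= max_depth:
            return controls
    return TIERS[-1][1]
```

The design choice mirrors the metaphor: enforcement is monotonic in depth, so an atom can never sit deeper than another yet carry fewer controls.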
It’s been a few months since my last blog. As always, work commitments come first and it’s been a bumper couple of months. I’ve been studying the military messaging environment and how it is evolving, and have summarised my findings in this whitepaper. The main thrust is that organisations should consider moving away from traditional Military Message Handling System (MMHS) approaches in favour of lighter, simpler, COTS-based, modular and more cost-effective solutions.
“The secret of war lies in the communications” – Napoleon Bonaparte
The ability to communicate effectively and without ambiguity has been, and continues to be, instrumental to the success of military organisations across the world. Throughout history, military organisations have pushed the boundaries of communication. Military messaging has evolved from smoke signals, to written letters, to telegraphs, to radio, to email and to unified communications today. Sending messages between organisations, units, roles and individuals is paramount to the success of both peacetime and wartime operations. Military information needs to be exchanged in a secure, time-sensitive and standardised manner, often in environments where connections may be intermittent or of low capacity. As messaging technologies rapidly evolve, the systems implemented must keep pace and retain the ability to upgrade easily and in short timeframes. The key challenge for most organisations today is to ensure that these core criteria are met without over-complicating messaging solutions to the point that they cannot be effectively implemented, maintained or used. This whitepaper discusses the current military messaging environment, how it has evolved, and the challenges surrounding the technologies. It then goes on to detail an alternative approach to these challenges which we call ‘Command Email’.
The Legacy – Military Message Handling Systems (MMHS) – Implementing Yesterday’s Technology Tomorrow
Traditional MMHS approaches can be summed up as complicated, expensive and fraught with risk.
Web 2.0 was recently crowned the one millionth word of the English language. This is perhaps just one indicator of the impact that Web 2.0 has had on our everyday lives. In this blog, I’m going to go into what Web 2.0 actually is, some of the underlying technologies, and the challenges these bring for security.
Data Loss Prevention (DLP) is a newer area of information security and assurance which has emerged in recent years. A host of software products, controls and solutions have found their way onto the market to help facilitate DLP, whether those losses are malicious or inadvertent. This market seems fledgling but is maturing as time goes on. People are just starting to understand the effects of losing data, most of which is lost by mistake: around 77% of data loss is “inadvertent” and unintended. Basically, people make mistakes; a much lower percentage of data loss is malicious. Compliance seems to be a major driver for the implementation of these solutions, and many key security players are positioning DLP as a core element of ongoing strategy. The question I have is: at this stage, are we ready to effectively apply AI (Artificial Intelligence) based systems, where the intended objective is for those AI systems to scan, analyse and, more importantly, classify information as sensitive or unimportant?
The DLP market does seem to be a slow starter, with a very small percentage of companies intending to deploy and a further fraction of that minority actually having a deployed system. The bulk of these solutions are what Gartner terms “content aware”. They generally monitor network/email traffic and at the same time deploy agents which can scan internal network resources (file shares, etc.) for sensitive data which is available where it shouldn’t be. The idea is that when sensitive information is located, it should be either removed, quarantined, blocked in transit, or authorised to remain in place or be distributed. The problem is that while it is easy enough to recognise information like credit card numbers, it becomes exponentially more difficult for these systems to understand more qualitative content. Qualitative content (e.g. information that is expressed in verbose literal wording rather than distinctive formats or patterns) is difficult for an AI system to match against a particular pattern or template in order to effectively classify the information. Examples of this type of information might include a new product idea for an investment bank, a groundbreaking formula for a new medicine in a pharmaceutical company, or perhaps even a World Cup-winning team strategy for a national football team. Information of this nature is usually specific on a company-by-company basis and also a case-by-case basis; one sports team’s strategy may not look anything like another’s.
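To see why formatted data is the easy case, here’s a minimal sketch of pattern-based detection for 16-digit card numbers: a regular expression finds candidates, and the Luhn checksum weeds out random digit strings that would otherwise be false positives. The function names are mine, not any vendor’s API, and real DLP engines handle far more number formats than this.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: rejects random 16-digit strings, cutting false positives."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Pattern-based detection: trivial for data with a distinctive format."""
    candidates = re.findall(r"\b\d{16}\b", text)
    return [c for c in candidates if luhn_valid(c)]
```

Notice there is no equivalent regex for “a new product idea” or “a team strategy”; that is exactly the qualitative content the paragraph above says these systems struggle with.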
It is for this reason that the term “false positive” is becoming widely used in the market, and anyone who has worked with a DLP system (or tried to deploy one) will…
Probably one of the more interesting news stories this month is the revelation that Google has admitted to packet sniffing on unsecured public Wi-Fi networks. Read the news here.
It appears that Google Street View cars were driving around taking pictures of various locations, but were also kitted out with network sniffers that could connect to unsecured public Wi-Fi access points and monitor and record data transmissions across those networks. Naughty stuff, Google. This went on for a total of three years and, according to Google, the activity was a “simple mistake”. This continues to reaffirm beliefs that public Wi-Fi networks are serious security risks for both individuals and companies. If one of the world’s largest IT monopolies can do this by accident, cough, what could a determined plan of attack achieve?
So how did they do it? The answer is: without rocket science. It’s easy enough to connect a laptop to an unsecured Wi-Fi network, as no passwords are required. Once connected, you can run a network sniffer to see what’s going on. Why not try it yourself on your own network? Try Wireshark, or perhaps Cain and Abel if you want a little more security analysis.
For an intro to packet capture and analysis using Wireshark, spend a couple of minutes watching this video:
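At its simplest, what a tool like Wireshark does is decode raw captured bytes into protocol fields. Here’s a minimal sketch in Python using a hand-built IPv4 header in place of a real capture (the addresses are made up, and header options are ignored):

```python
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header from the start of a captured packet."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,            # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-crafted sample header standing in for a real capture.
sample = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 0, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.10"),
                     socket.inet_aton("10.0.0.1"))
```

On an unencrypted network, everything after that header (including any plaintext logins) is just as readable, which is the whole point of the Google story above.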
If the answer is yes, the BackTrack penetration testing OS (BackTrack 4 – www.backtrack-linux.org) would beg to differ.
BackTrack 4 manifests itself as an entirely customised distribution of Linux. The underlying distro is Ubuntu, but it has been specifically enhanced, configured and packaged for the purposes of penetration testing. Within the package you receive a wide variety of wireless cracking, network scanning and password breaking tools.
There are several options for running BackTrack to start your activities: you can install it as an OS on your hard drive, you can install it on and run it from a USB stick, or you can even run the entire OS from CD. The latter option requires no installation at all; you simply select a machine, boot from the CD and remove the CD when finished. I chose this option for running my tests to see if it really worked.
I started by booting the OS and starting X Windows. Most work is done from the Konsole terminals. In short, there are four key utilities you can use to crack WEP and WPA keys. These are:
airmon-ng: Used to put your wireless card into monitor mode.
airodump-ng: Used to collect wireless packets and save them to disk.
aireplay-ng: Used to implement a number of replay attacks on the Wireless Access Point (AP). In our scenario this is useful to make the AP accept or generate more packets. Cracking wireless is generally about collecting enough packets (100k–500k) to derive keys.
aircrack-ng: Used on the collected packets to find the keys.
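Pulled together, the four utilities form a sequence. The sketch below just builds the command lines rather than executing them; the interface names, channel and BSSID are placeholders, and exact flags can vary between aircrack-ng versions, so check the man pages before trying this on a network you own.

```python
# Sketch of the crack workflow as command lines (built, not executed).
bssid = "00:11:22:33:44:55"   # target access point (hypothetical)
channel = "6"
iface = "mon0"                # monitor-mode interface created by airmon-ng

steps = [
    ["airmon-ng", "start", "wlan0"],                        # 1. enable monitor mode
    ["airodump-ng", "-c", channel, "--bssid", bssid,
     "-w", "capture", iface],                               # 2. collect packets to disk
    ["aireplay-ng", "-3", "-b", bssid, iface],              # 3. ARP replay to generate traffic
    ["aircrack-ng", "capture-01.cap"],                      # 4. derive the key from the capture
]

for step in steps:
    print(" ".join(step))
```

Steps 2 and 3 run in parallel in practice: the replay attack exists purely to make the packet count climb towards that 100k–500k target faster.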
Check out these videos for a step by step example.
Disclaimer: You should be aware that it is illegal to hack into a wireless network that you do not own. This example is for test and education purposes only.
Any determined attacker can usually find a way to get access to your networks, but here are a few tips to make it much more difficult:
Use WPA encryption – it’s more difficult to crack than WEP.
Restrict network access to known MAC addresses – MACs can be spoofed, but it’s another hurdle to slow an attacker down.
Switch it off when you are not using it – If there is nothing in the air, there is nothing to analyse. The information an attacker requires to crack the keys is simply not there.
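The MAC-restriction tip boils down to a whitelist check in the access point. A toy sketch (the addresses are invented, and real filtering lives in the router’s firmware, not in Python):

```python
# Hypothetical router-side whitelist of known adapter MAC addresses.
ALLOWED_MACS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def may_associate(mac: str) -> bool:
    """Allow association only for known adapters (spoofable, but a hurdle)."""
    return mac.lower() in ALLOWED_MACS
```

As the tip says, this is a speed bump rather than a wall: an attacker who sniffs a legitimate client’s MAC can clone it, which is why it belongs alongside WPA rather than instead of it.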