That square QR barcode on the poster? Check it’s not a sticker
Crooks slap on duff codes leading to evil sites
Cybercrooks are putting up stickers featuring URLs embedded in Quick Response codes (QR codes) as a trick designed to drive traffic to dodgy sites.
QR codes are two-dimensional matrix barcodes that can be scanned by smartphones, linking users directly to a website without having to type in its address. By using QR codes (rather than links) as a jump-off point to dodgy sites, cybercrooks can disguise the ultimate destination of links.
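To illustrate how cheap the trick is, the hedged sketch below generates a QR code for an arbitrary URL using the third-party Python qrcode package (the package choice and the URL are assumptions for illustration, not details from the article); the encoded destination stays invisible until a phone scans and resolves it.

```python
# Minimal sketch: encode a URL in a QR code image using the third-party
# "qrcode" package (pip install qrcode[pil]). The destination is hidden
# from anyone looking at the printed code until a phone resolves it.
import qrcode

url = "https://example.com/landing-page"   # could just as easily be a malicious address
img = qrcode.make(url)                      # returns a PIL image of the 2-D barcode
img.save("poster_code.png")
print("QR code written to poster_code.png ->", url)
```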
Security watchers have already seen spam messages pointing to URLs that use embedded QR codes. Now crooks have gone one step further by printing out labels and leaving them in well trafficked locations.
Warren Sealey, director of enterprise learning and knowledge management at Symantec Hosted Services, explained: “We’ve seen criminals using bad QR codes in busy places, putting them on stickers and putting them over genuine ones in airports and city centres.”
Sealey made his comments at the Ovum Banking Technology Forum 2012 in London on Wednesday.
Sian John, UK security strategist at Symantec, said: “There has been an explosion in the number of QR codes over the last couple of years, and cybercriminals are taking full advantage. Because QR codes just look like pictures it’s extremely difficult to tell if they’re genuine or malicious, making it easy to dupe passers-by into scanning codes that may lead to an infected site, or perhaps a phishing site.
“If users want to make sure that their mobile is protected they should consider a QR reader that can check a website’s reputation before visiting it,” she added.
Eurograbber Media Alert: Check Point and Versafe Uncover New Attack
Research Reveals an Estimated 36+ Million Euros Stolen from Banking Customers across Europe
Check Point® Software Technologies Ltd. (Nasdaq: CHKP), the worldwide leader in securing the Internet, and Versafe, a private and independent vendor of online fraud prevention solutions, today published “A Case Study of Eurograbber: How 36 million was stolen via malware”. The case study uncovers a highly sophisticated attack used to steal millions from corporate and private banking customers across Europe.
Eurograbber was launched against banking customers, using a sophisticated combination of malware directed at computers and mobile devices. The malware, in conjunction with the attackers’ command and control server, first infected the victims’ computers, and then, infected their mobile devices in order to intercept SMS messages to bypass the banks’ two-factor authentication process. With the stolen information and the transaction authentication number (TAN), the attackers then performed automatic transfers of funds, ranging between €500 and €250,000, from the victims’ accounts to mule accounts across Europe.
- An estimated €36+ million has been stolen from more than 30,000 corporate and private bank accounts.
- The attacks originated in Italy, but quickly spread to Germany, Holland, and Spain.
- The theft involved a sophisticated combination of malware directed at computers and mobile devices of banking customers.
- A new and very successful iteration of a bot attack (the Zeus Trojan) was used in the widespread Eurograbber attack.
- Android and Blackberry mobile devices were specifically targeted, showing that attacks against Android devices are a growing trend.
“Cyberattacks are constantly evolving to take advantage of the latest trends. As online and mobile banking continue to grow, we will see more targeted attacks in this area, and Eurograbber is a prime example,” said Gabi Reish, Head of Product Management at Check Point Software Technologies. “The best way to prevent these attacks is with a multi-layered security solution that spans network, data, and endpoints, powered by real time threat intelligence.”
“Cyberattacks have become more sophisticated, more creative, and more targeted than ever before,” said Eran Kalige, Head of Security Operation Center, Versafe. “As seen with Eurograbber, attackers are focusing on the weakest link, the people behind the devices, and using very sophisticated techniques to launch and automate their attacks and avoid traceability.”
Check Point provides comprehensive protection for both enterprises and consumers against all types of threats. Check Point Gateways running Check Point Software Blades, such as Antivirus, Anti-bot, and IPS, can detect and prevent the Eurograbber attack. Check Point Threat Cloud™, the first collaborative network to fight cybercrime, feeds software blades with real-time intelligence and signatures enabling the gateways to identify and block attacks, including malware detection and bot communications, which are key elements of the Eurograbber attack. Additionally, Check Point’s ZoneAlarm solutions protect home users’ computers from Zeus Trojan variants and other malware and online threats.
Versafe’s technology and products detect and prevent attacks like Eurograbber in real time. With its unique set of components installed on a bank’s website, Versafe protects online users who log onto the website. By leveraging components such as vHTML, Versafe can detect zero-day malware. Additionally, Versafe vCrypt eliminates malware functionality and renders the attacker’s database useless. Versafe offers financial organizations that operate online the ability to gain and maintain control over areas that were previously unreachable and indefensible, enabling them to protect their end users seamlessly.
The case study provides step-by-step insight into how Eurograbber was executed against thousands of banking customers across Europe, and it includes solutions for both consumers and enterprises to prevent these types of attacks. For the full report, please click: http://www.checkpoint.com/products/downloads/whitepapers/Eurograbber_White_Paper.pdf
Practicing for cyberwar
The Pentagon is building a virtual city that will enable government hackers to practice attacking and defending the computers and networks that increasingly run the world’s water, power and other critical systems.
To reinforce the effect of those attacks, the cyber-range, known as “CyberCity,” will include a scale model of buildings and other facilities that will physically respond when attacks have been successful — or unsuccessful.
Big Backup Challenge Of Medium-Size Data Centers
Heavy on virtualization, but stuck with a legacy backup solution? Here’s how to choose a virtualization-friendly backup app.
Data centers of all sizes struggle with securely and reliably protecting their data, but the medium-size data center might have the most unique set of challenges. These organizations tend to be heavily virtualized, have very dense virtual-machine-to-host ratios, and be very dependent on their applications to drive the business. They also tend to be the tightest on IT staff and on dollars.

These organizations are often referred to as small- to medium-size businesses (SMB) or small- to medium-size enterprises (SME). I find both these terms too broad because they can range from a very small business with no servers to a relatively large business with dozens of servers. Also, many of the data centers in this group are local, state and federal agencies, so they don’t typically fit the standard business mold.
In general the medium-size data center has dedicated servers, most of them virtual, performing tasks such as email, collaboration and file sharing. In most cases they have a database server running a few off-the-shelf applications they have customized to some extent. They have shared storage typically on iSCSI SAN or a NAS running NFS to host their virtualized images. This medium-size data center could also be a computing pocket within a very large organization that for practical reasons needs to manage its own IT resources.
Although these organizations tend to struggle with the storage systems supporting their virtual infrastructure, data protection seems to be the harder problem to address. This is probably because they are not yet at the point where they are generating enough storage I/O requests to justify a larger, solid state disk (SSD)-heavy, enterprise-class system. In a recent test drive we showed that intelligently adding a little SSD could improve performance, and these organizations are fine with that.
Backup and data protection are another challenge altogether. Again, these organizations are heavily virtualized, short on staff, and in many cases don’t have a second site to which to replicate data for disaster recovery. They often started with legacy backup solutions or the backup solutions that came with their operating systems. The problem is they are too heavily virtualized for these types of applications and could benefit from an application that is more virtualization-aware. Enterprise backup applications do a good job of this but tend to be too complex and too expensive for this environment. As a result, many of these companies turn to virtualization-specific data-protection products.
There are three key things that the medium-size data center should be looking for when it comes to selecting a backup application for the virtualized market. First, can it afford the app? It really doesn’t matter how great the features are if there is not enough budget to get the product. I suggest talking price first before downloading and installing anything.
Second — and still before installing — understand the application’s capabilities for getting data off site. This is especially important if you don’t have a secondary site to send data to. Does the software product you are considering have the ability to send data to a cloud storage facility? And if so, is it a service whose relationship you can leverage to use the cloud for other purposes?
Third, is the product easy to use and does it have the features you need to accomplish the task at hand? This part does require downloading and installing a trial of the product. The good news is that in the virtualization-specific backup space, downloadable trials seem to be the common distribution method. But this is also why the first two suggestions above are so important; you don’t want to and probably don’t have time to try every single product on the market.
Certainly there is more to selecting a backup solution for the medium-size organization, but the above is a good start. We discussed many of the remaining issues that need to be considered in our recent webcast The 4 Headaches of Backing Up The Virtualized SMB.
Disaster recovery: A necessity, not a luxury
With the advent of virtualisation and cloud, disaster recovery is no longer a massive expense, and with innovations in the disaster recovery sphere making it easier to deploy and maintain a DR site, it is an insurance policy all enterprises should look at, according to regional experts.
There are many threats that will cause business disruption, from man-made threats such as viruses or war, to environmental threats such as floods and earthquakes. Many can be broadly classified as security threats, including virus attacks, hardware/firmware corruption, accidental deletion, hacking or physical break-ins.
According to Yasser Zeineldin, CEO at eHosting DataFort, 56% of business disruption threats are related to hardware/software, power or telecommunications failures, 20% have a malicious intent, and 24% are natural disasters.
This last category may be one of the most important trends in disaster recovery, as climate change is believed to be responsible for an unprecedented level of losses due to natural disasters. For example, recent events like Hurricane Sandy in North America caused approximately $50 billion in damages, including leaving eight million homes and businesses without power.
The intensity and widespread nature of such natural disasters means backup and recovery centres need to be located far enough out of region for adequate safety and recovery.
Disaster recovery is a very big trend for 2013 and beyond, and for many companies it is an insurance policy that they did not think they required, until they needed it. The notion that ‘my data will never be affected’ was prevalent in the market, but now there is the realisation that it is important to have a disaster recovery plan, particularly from a compliance point of view.
“Across the region there is a lot more interest in disaster recovery from both the enterprise and the SMB sector. The fact of the matter is yes it is becoming a lot more affordable and manageable and the complexity of disaster recovery solutions has come down enormously in the last few years,” explains Aman Munglani, research director at Gartner.
The cost of hosting has also been coming down rapidly, making deploying a disaster recovery site accessible to a much larger number of companies. This means that when the primary and secondary sites are kept in-house, a third site can sit with either a service provider or a dedicated disaster recovery provider.
From an overall perspective the falling cost per gigabyte and the availability of scale-up systems are making disaster recovery faster and cheaper.
According to Philippe Elie, director of operations, EMEA, Riverbed, disaster recovery is a hit in the UAE and the GCC at the moment.
“I was attending an IDC event in Saudi one month ago and when I was polling the audience, the great majority was thinking of or already deploying a disaster recovery product, and the trends are the focus on security, the threat around security and the fact that it is now getting cheaper,” he says.
While disaster recovery is an old-hat technology, its adoption has not been as pervasive as virtualisation because of its cost. However, that is changing. Basil Ayass, enterprise product manager at Dell Middle East, says the major trend he is seeing is that disaster recovery adoption is spreading rapidly.
“Adoption is increasing, especially with the recent hurricane Sandy in the US, and we have had the security breaches at Aramco, RasGas that were made public and the press reported heavily on those. So, customers are aware that today the security situation is getting worse and all enterprise customers today have disaster recovery in place or have plans with aggressive timelines to implement it,” he states.
Organisations are also now trying to achieve more with less in the disaster recovery sphere, leading to an increasing reliance on virtualised technology. Within the disaster recovery environments there is also more emphasis on access to mobile devices and smartphone technology.
“In disaster recovery scenarios, most people will still have access to mobile devices, so they can achieve the core functionality of their business, and that is becoming a key piece of technology in disaster recovery. We are also starting to see the green shoots of the move towards cloud technologies, not yet being fully adopted, but I think that moving forward that will become very relevant to disaster recovery,” says Allen Mitchell, senior technical account manager, MENA, CommVault.
Companies also need structure in their data and the way that it is stored; they cannot afford to wait hours or days to recover data. The recovery window is the time it takes to recover data from the backup site to the main site, and enterprises often complain of bandwidth problems here in the UAE. CommVault says that with the bandwidth issue, the recovery window is far too long.
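As a rough, hedged illustration of why the recovery window balloons on constrained links (the data volumes, link speeds and efficiency factor below are assumed figures, not numbers from CommVault), the arithmetic is simply data volume divided by effective throughput:

```python
# Rough recovery-window estimate: time = data volume / effective throughput.
# All figures below are illustrative assumptions, not vendor-quoted numbers.
def recovery_window_hours(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    data_bits = data_tb * 1e12 * 8                  # terabytes -> bits
    effective_bps = link_mbps * 1e6 * efficiency    # protocol overhead, contention, etc.
    return data_bits / effective_bps / 3600

for link in (50, 100, 1000):                        # link speed in Mbps
    print(f"5 TB over {link} Mbps ~ {recovery_window_hours(5, link):.1f} hours")
```

Even a generous 1 Gbps link leaves a multi-terabyte restore running for the better part of a day, which is why deduplication and WAN optimisation feature so heavily in the discussion that follows.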
Virtualisation in disaster recovery
Dell says that along with the trend of trying to make disaster recovery more flexible and easier comes the fact that, in the past, disaster recovery required you to replicate your primary data centre: whatever you had invested in data centre A, you had to replicate using the same equipment from the exact same vendors.
“Everything in A had to be copied in B and most people don’t even know what they have – they bought it five years ago or more and maybe some of the vendors are out of business, the technology is old, that is why it was so complex. Today the trend we are seeing in disaster recovery is that it does not have to be a replication of your primary site. Now with solutions like virtualisation we have virtual replication, virtual storage, virtual networking, which enables you to build a disaster recovery site that is a fraction of the size and cost of your primary site and it does not have to be with the exact same vendors,” explains Ayass.
Data protection strategies used to be focused on making a local backup copy on tape and hauling the tapes offsite for protection. Today there are many innovations to electronically move the data over the network to lessen the data loss and improve the recovery times. For example, virtual tape systems enable a disk-to-disk data transfer to the backup facility before moving data to tape. In addition, synchronous and asynchronous data replication between disk systems electronically sends data offsite. Finally, a remote storage virtualisation network provides continuous availability by caching and replicating data in real time.
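A minimal sketch of the synchronous versus asynchronous replication trade-off described above, assuming a simulated WAN delay and invented function names (this is a conceptual model, not any vendor's implementation): synchronous writes wait for the remote acknowledgement, while asynchronous writes return immediately and are shipped to the recovery site in the background.

```python
import queue, threading, time

remote_copy = []                      # stands in for the DR-site disk array
async_queue = queue.Queue()           # buffer for asynchronous replication

def remote_write(block):
    time.sleep(0.05)                  # simulated WAN round trip
    remote_copy.append(block)

def write_synchronous(block):
    """Primary write completes only after the remote site acknowledges:
    near-zero data loss, but every write pays the WAN latency."""
    remote_write(block)
    return "ack"

def write_asynchronous(block):
    """Primary write returns immediately; a background thread drains the
    queue to the remote site. Faster, but in-flight blocks can be lost."""
    async_queue.put(block)
    return "ack"

def replicator():
    while True:
        remote_write(async_queue.get())
        async_queue.task_done()

threading.Thread(target=replicator, daemon=True).start()

t0 = time.time(); [write_synchronous(i) for i in range(5)]
print(f"sync:  5 writes in {time.time() - t0:.2f}s")
t0 = time.time(); [write_asynchronous(i) for i in range(5)]
print(f"async: 5 writes in {time.time() - t0:.2f}s (replication continues in background)")
async_queue.join()
```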
CommVault says that most organisations are moving away from a reliance on tape, so it is much easier for organisations to use and recover disk copies, and deduplication is reducing the size of data and making it more bandwidth-efficient, which is easing the process of moving it from site to site. Replication technologies are making life easier, and more intelligent storage applications and devices are easing the burden of disaster recovery.
“These storage and data protection systems take advantage of innovations in inter-data centre networking, which now offer more scalable, low latency and deterministic data transfer to the recovery site. For example, Carrier Ethernet network services scale in granular increments from 100 Mbps to 10 Gbps speeds more easily than SONET/SDH technology, which required costly stair steps to move up in bandwidth.
And packet optical systems can go all the way to 100 Gbps if needed. This networking flexibility enables companies to more easily afford the right amount of bandwidth and performance for the job, instead of inefficient over-provisioning or insufficient data throughput,” explains Mervyn Kelly, EMEA marketing director, Ciena.
Future technologies will enable even more sophisticated performance-on-demand to tailor network performance, capacity, latency and cost parameters to individual enterprise disaster recovery requirements.
According to Injazat Data Systems, current regional technology trends in the disaster recovery field mainly focus on two areas, virtualisation and cloud-based disaster recovery which offer easier and far more affordable disaster recovery solutions.
“Although virtualisation has been around for some time, it is only now that virtualised environments in the region are getting to a mature and stable stage where IT Executives are feeling comfortable to extend it to their disaster recovery strategy,” says Chris Bester, senior consultant, Injazat Data Systems.
Cloud in disaster recovery
Public cloud-based disaster recovery is another trend in the disaster recovery space; however, according to Injazat, uptake has been a bit slow, mainly because organisations are still reluctant to relinquish control of their data (albeit for recovery purposes) to a third-party public cloud provider where data from multiple entities is hosted.
“On the other side of the coin though, as building your own private cloud environment for disaster recovery is not viable from a cost benefit perspective, we saw an increase in enquiries and requests in recent months for hosting organisations like ourselves to provide private cloud services,” explains Bester. “Further to this, outsourcing data centre services for disaster recovery is becoming more appealing to many organisations as one of the alternatives when considering setting up a disaster recovery site.”
Outsourcing disaster recovery sites:
A current trend in the market is having hosting or service providers provide disaster recovery space, hardware and cooling. These companies are putting up data centres in the UAE, Qatar, Saudi Arabia and Egypt, and are providing cost-effective disaster recovery locations.
One of the biggest obstacles to building a disaster recovery site in the past was that companies had to buy real estate and build a new data centre, which put it out of reach for most medium-sized businesses. Now that they do not need to have their own space, they can move to a service provider that supplies the space, power and cooling, and own a DR site without paying for that infrastructure themselves, which makes disaster recovery a lot more attainable.
“With the new innovations, we are providing our customers with very small servers; we have moved storage and networking into blades. Today we have a disaster recovery in a box kind of solution. So, to build a disaster recovery site you no longer need a room or a data centre, you just need part of a rack; that can be more than enough for a lot of businesses, and they just protect the mission-critical applications they need to keep the company running in case of disaster,” states Ayass.
There is also another option for larger companies: moving to a three-site disaster recovery model. These large enterprises are worried that if the disaster recovery site is in the same country it could be impacted, so they are moving to a third site, and that is where the trend of mobile data centres is beginning to emerge.
“We are seeing requirements in countries like Saudi, Kuwait and Egypt where companies want a disaster recovery site, but they want it to be mobile so they can move it in case of a revolution, so we are building these mobile data centres. We get a container, a standard shipping container and we build disaster recovery sites inside the container and you can put them on the roof, in a basement, anywhere and if you need to move in case of a major disaster you can put it on a truck and drive it and your DR site can go with you to wherever you are relocating to,” explains Ayass.
However, outsourcing your disaster recovery site may not be the right move for all companies.
When to outsource your data:
The deployment and location of the disaster recovery site is dependent on many factors. Large organisations with over $1 billion in revenue typically own an average of four data centres totaling 60,000 square feet. These companies may find it more economical to set up inter-data centre networks and use their own facilities for backup/recovery or even move to an active/active configuration for more continuous availability.
Smaller organisations may typically have consolidated into a single data centre. These companies may use a disaster recovery data centre from a vendor such as SunGard or IBM BCRS to provide recovery services.
“There is not a single answer for every company’s disaster recovery needs, so most organisations need to put a lot of time into developing their Business Impact Analysis to determine the right strategy for their organisation,” states Ciena’s Kelly.
Managing data centre disaster recovery in the right way requires highly specialised personnel and creates infrastructure demands and additional costs for an enterprise. These requirements are reason enough for many organisations to consider whether they should handle disaster recovery in-house or utilise a third-party DR provider.
eHosting DataFort has actively worked with various clients who have invested in outsourcing disaster recovery and business continuity. According to Bester, some larger institutions tend to keep their disaster recovery in-house due to various factors but others with limited resources tend to outsource.
“Whichever route you take, your decision should be based on solid business intelligence gathered through a proper Business Impact Analysis [BIA] and Risk Analysis [RA] process. We experienced a recent increase in requests for assistance with the BIA process from various organisations across a number of business sectors as they are typically trying to solidify their disaster recovery strategies to accommodate regulatory and business demands,” he states.
In the Middle East, disaster recovery is mainly kept in-house, according to Sam Tayan, regional director, VMware MENA.
“When someone is keeping your data for you, in terms of the integrity of that data and the value of that data, you look for regulations, and maybe that is an area where more could be done, and you also look for other aspects such as insurance underwriting and things like that. Again, that is an area in this part of the world where there is work to be done. If you are a company that wants to outsource your disaster recovery, you are more likely to do that when you have a well understood regulatory environment and a well understood insurance underwriting environment as well, and that is why here the tendency is to do it in-house,” he explains.
Fresh air cooling
Another trend is fresh air cooling. One of the barriers to DR is its prohibitive cost, especially during the summer we experience in the Gulf, and so Dell is building solutions around fresh air.
Dell works with both eBay and Bing, and both of these customers have asked Dell to build disaster recovery and data centres for them that survive up to 45 degrees Celsius.
These companies are building disaster recovery sites and not cooling them; they are leaving them in open air and using fresh air to keep them running.
“We are providing fresh air solutions to our largest customers and now we have made that available to smaller customers and customers across the Middle East, which saves on CAPEX and OPEX,” says Ayass.
Focusing on the big picture
Yasser Zeineldin, CEO at eHosting DataFort says that an IT platform is only well-designed and robust when it’s supported by an equally well-designed and robust infrastructure. Organisations must not narrowly focus on specialised segments such as servers and app design, but also understand how all aspects of the infrastructure make a huge impact on systems when considering scalability.
“IT managers must take a holistic view of clients’ systems and make sure all factors are taken into account when designing solutions. Enterprise architecture, including networks, servers and applications, must be clearly understood to know how they relate, interact and impact scalability and performance,” he states.
Understanding how to tie together diverse network components to guarantee uninterrupted operations is crucial to a sound technology management operational plan. While time consuming, these are critical processes to ensure an organisation’s ability to recover from unplanned events with minimal or no disruption of services and operations.
Regional DR investment
Costs have become more manageable and that is making disaster recovery more affordable and more achievable to most large organisations.
The investment in disaster recovery is always tied to the business. Typically a Business Impact Analysis is performed to measure the impact of each application, which is then ranked according to its criticality in sustaining the business. This helps set a recovery point objective (RPO) – how much data might be lost in a disaster – and a recovery time objective (RTO) – how quickly the application needs to be restored. In addition, the threats to be considered may vary by geographic location, so each data centre environment is unique.
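As a hedged illustration of how BIA output might be turned into per-application tiers (the application names, RPO/RTO targets and strategy thresholds below are invented for the example, not taken from the article):

```python
# Toy Business Impact Analysis table: criticality 1 = most important.
# RPO = maximum tolerable data loss; RTO = maximum tolerable downtime.
applications = [
    {"name": "online banking portal", "criticality": 1, "rpo_minutes": 0,    "rto_minutes": 15},
    {"name": "email",                 "criticality": 2, "rpo_minutes": 60,   "rto_minutes": 240},
    {"name": "file shares",           "criticality": 3, "rpo_minutes": 1440, "rto_minutes": 1440},
]

for app in sorted(applications, key=lambda a: a["criticality"]):
    # Map the tolerances to an illustrative protection strategy.
    strategy = ("synchronous replication" if app["rpo_minutes"] == 0
                else "asynchronous replication" if app["rpo_minutes"] <= 60
                else "nightly backup to DR site")
    print(f'{app["name"]:<22} RPO {app["rpo_minutes"]:>4} min  '
          f'RTO {app["rto_minutes"]:>4} min  -> {strategy}')
```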
This analysis helps set the required disaster recovery strategy and architecture, and the cost justification for the business. Companies are becoming more aware of the threats because of a number of factors, including these natural disasters and an increasing level of mandatory data protection regulations from government or industry.
In the region, as the costs of setting up a disaster recovery site have become more manageable, vendors have been able to provide companies with more flexibility in terms of what they can purchase to set up a disaster recovery site.
“We see them investing more in disaster recovery and we see not only companies that have disaster recovery invest more in it, but new companies start moving into a business continuity or disaster recovery solution,” explains Ayass.
The Arab Spring fostered the shift of international and regional companies to a more stable political environment, making Dubai and the UAE as a whole an attractive option for relocating regional offices and operations, or at least for locating a secondary disaster recovery site here. In the current sensitive political scenario in the Arab region, as well as the erratic economic conditions around the world, the UAE has emerged as an obvious safe haven, providing comfort through a host of opportunities that boost investor confidence and attract a large influx of people eager to set up business here.
Additionally, the UAE’s strategic geographical location and the strong infrastructural network are key contributors to the increase in foreign direct investment in the ICT sector. The political insecurities in the region served as a good wake-up call to Middle East enterprises on the importance of investing in a disaster recovery strategy for the long term.
“The unpreparedness and losses have prompted organisations to gear up for the future by putting in place disaster recovery solutions. Although the chances of disasters actually occurring from a political crisis are minimal, organisations are now aware of the potential damage of downtime and data loss to a business operation,” says Zeineldin.
In Middle East-based organisations with established disaster recovery programmes, executives now expect their IT department heads to harness the benefits of current technology offerings, such as virtualisation and cloud-based solutions, to provide data recovery for the same or even a lower budget.
Top 10 myths about technology
There are many myths and half-truths about technology, and as more people start to make use of everyday devices, many stories once believed to be fact are being disproven. In the spirit of today being Friday the 13th, IT News Africa takes a look at some of the most mind-boggling myths that have been exposed as untrue, or only partially true.
Charlie Fripp – Consumer Tech editor www.itnewsafrica.com
As systems started to evolve and become more complex, the ‘SysRq’ key became redundant with no standard use (image: Charlie Fripp)
1. Email is better for communicating, all the time
With the growth of many platforms to stay connected and spur on communications, email is certainly better in some situations, but not all the time. While most people have an available connection to email, it is important to note that a traditional phone call will yield results far quicker for important information. If a short answer is needed, it might be better to make use of Instant Messaging or send a message on social networks such as Facebook or Twitter.
2. Every computer user is as computer-literate as I am
As mentioned in the introduction, more people are making use of electronic devices to stay in contact or to conduct business, but not everyone is on the same level. Much of the older generation is just getting used to the idea of social networks, IM and Skype, and it cannot be expected of them to be as literate as a generation that grew up surrounded by technology. Some people also just have a knack for technology and will tinker with just about anything, but truth be told, not every computer user will know how 3D technology and smartphones work – or even email, for that matter.
3. A 64-bit Operating System will make computing twice as fast as a 32-bit system
While a computer running a 64-bit operating system (OS) will generally run faster than a 32-bit OS, this is only a half-truth. When operating on 64-bit, the programs running on the computer also need to be compatible with the 64-bit system. Luckily most software vendors today release separate 32-bit and 64-bit software, which will make a difference in computing speed. Running a 32-bit version on a 64-bit machine will, however, yield almost no noticeable difference in speed – unless the machine has more than 3GB of RAM.
4. Expensive HDMI cables will improve your HDTV quality
This is a myth that the salespeople at electronics stores love to indulge in – but in reality your HDTV will not care which HDMI cable you use. Digital audio/video standards like DisplayPort, DVI, and HDMI do not suffer from interference and disruption the way an analog audio or video signal does, and only a huge drop in signal voltage will cause them to lose signal. HDMI cables are generally all made from the same material, while some claim to be gold-plated or treated with different coatings – but even if there were a difference in screen quality, it would be on a microscopic level that is almost impossible to detect with the naked eye.
5. You always need to ‘eject’ a USB device before unplugging it
Being a half-truth, this is a valid statement only under certain conditions. The only reason users are urged to ‘eject’ a USB drive before unplugging it is to make sure that whatever data was being transferred has finished copying over – otherwise the data will be corrupted. However, if the USB drive was plugged in just to check on its contents or to move data from it, there is no need to ‘eject’ it, as nothing was written to it. Devices like keyboards, mice, printers and scanners can be unplugged without having to ‘eject’ them first – provided that they are not switched on.
6. In photography, bigger Megapixels are always better
This is another favourite sales pitch from floor staff at electronics stores, which is simply not true. Most of the time, better photographs depend on the skill of the person actually taking the photo. Having more megapixels (and paying more for a camera with them) does not give you a better quality photo – it only starts becoming a factor when images are hugely enlarged. “Even when megapixels mattered, there was little visible difference between cameras with seemingly different ratings. For instance, a 3-megapixel (photo) pretty much looks the same as a 6-megapixel (photo), even when blown up to 12 inches by 18 inches,” comments photography expert Ken Rockwell. In truth, the majority of people will view their images at a standard size, where megapixels do not matter.
7. Having more bars on your mobile will give you better mobile reception
The signal bars on a mobile phone only indicate the strength of the signal from your mobile phone to the nearest mobile tower, but they are no guarantee that the service will be any better than when the phone shows only one bar. A user’s signal to the tower might be strong, but if there is only one tower in the area, a lot of people will be connecting to the same tower at the same time, which will cause a loss in the quality of the communication. Once your signal makes contact with the cell tower, it has to compete against other users through the service provider’s backhaul network – which could lead to poor quality.
8. Your Internet Service Provider can track everything that you do online
While it will certainly cause an outrage among digital privacy advocates, in theory it is quite possible. When connecting to the Internet, the ISP is a user’s link to the outside world, and all traffic to and from the user needs to flow through the ISP’s network of routers. So in theory, an ISP has the access and capabilities to scan user traffic, but it is simply not feasible. “Fortunately for us, it doesn’t have the money or the desire to archive every bit of information that comes its way. ISPs don’t routinely save the Web surfing histories and e-mail conversations of their users. It would simply be too expensive to save all of that data and the public outcry from privacy rights and civil liberties organizations would be deafening,” comments Dave Roos from Get Stuff. Some websites, however, do keep track of IP addresses, which can be traced back to individual users.
9. Airport scanners can damage a digital camera’s memory card
There simply is no truth to the myth, although some users will still be a little nervous when sending their camera and memory cards through an airport’s X-Ray scanner when coming back from a holiday to the Maldives. If scanning equipment damaged memory cards, there would be a huge outcry and regulations would have been put in place to stop it from happening, but the little plastic fellows are actually very robust. In tests conducted by Digital Camera Shopper magazine, memory cards survived a dip in a fizzy cold drink, a spin through a washing machine, being run over by a skateboard, and the playful nature of an unsuspecting six-year-old child.
10. The ‘SysRq’ button on a keyboard has an actual function
While strictly speaking not actually a fable, we could not resist the temptation to include it in our list of the biggest tech myths, as the ‘SysRq’ (System Request) key is one of the biggest mysteries since the invention of the keyboard. Ask any technology fundi what that key actually does and I guarantee they will be wrong. There is a rumor that when pressed, the computer simply registers that it has indeed been pressed – and nothing more. According to Wikipedia, it was “introduced by IBM with the PC/AT, it was intended to be available as a special key to directly invoke low-level operating system functions with no possibility of conflicting with any existing software.” But as systems started to evolve and become more complex, the key became redundant with no standard use. So much so that electronics manufacturer Lenovo started removing the ‘SysRq’ key from their keyboards in 2010.
The great Wi-Fi phishing: police to patrol down your street
The Queensland Police fraud squad says it will be the first police force in the world to go on “wardriving” missions to warn homes and businesses if their wireless networks are not secure.
July 21, 2009, Asher Moses, www.smh.com.au
Detective Superintendent Brian Hay said criminals were piggy-backing on the WiFi connections of ordinary computer users and using them to anonymously commit crimes such as fraud and identity theft.
The process of searching for open wireless networks using a laptop or handheld in a moving vehicle is known in the geek community as “wardriving”.
Many home networks can be accessed by anyone within range because strong security settings are often not enabled and passwords are rarely changed from the default setting.
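For a sense of what such a survey records, the sketch below lists nearby networks and flags the unencrypted ones; it assumes a Linux machine with NetworkManager's nmcli available, which is an assumption about the environment rather than anything the police described.

```python
# List nearby Wi-Fi networks and flag the ones broadcasting no encryption.
# Assumes Linux with NetworkManager's nmcli available on PATH.
import subprocess

out = subprocess.run(
    ["nmcli", "-t", "-f", "SSID,SECURITY", "dev", "wifi", "list"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    ssid, _, security = line.partition(":")
    if not ssid:
        continue                      # hidden network, nothing to report
    status = "OPEN (unsecured)" if security in ("", "--") else security
    print(f"{ssid:<32} {status}")
```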
“All unsecured WiFi networks out there are open for exploitation by the crooks and the average mum and dad don’t understand the vulnerabilities,” Detective Superintendent Hay said in a phone interview.
“More and more houses are going into WiFi and setting up multiple computers on a network, and not appropriately securing them.
“These things are going to be exploited more and more as time goes on … we want to close the holes before too much damage is done.”
Detective Superintendent Hay said it was important for police to get “ahead of the game” as crooks were now sharing information on satellite maps showing vulnerable areas with large numbers of unsecured networks.
He blamed computer equipment sellers for not doing enough to educate customers on the importance of security.
He said it was illegal to use someone else’s network bandwidth without their permission, even if that bandwidth was not used to commit another crime such as identity theft.
Queensland Police has not yet decided how many officers it will task with seeking out unsecure networks, but it is calling on the private sector to help out with equipment and expertise.
Detective Superintendent Hay said the operation would be limited to Queensland but the idea might filter down to other states.
“I actually have not heard of this being done anywhere else in the world,” he said.
“It’s not about catching the bad guys as much as limiting their area of operations.”
Detective Inspector Bruce van der Graaf, head of the NSW Police Computer Crimes Unit, said he was watching the Queensland Police operation with interest.
“Apart from notifying people that their wireless is unsecure I don’t know what else would be achieved by it but if their trial is fruitful we’d always participate in something that works,” he said.
The Queensland operation could attract criticism from those who believe police time would be better spent seeking out drug dealers and robbers, but Detective Superintendent Hay said the issue was just as important as any other.
Criminals could steal information from computers on vulnerable networks and also use other people’s internet connections to launch malware and other cyber crime attacks.
“If we save mum, grandma and grandpa from losing their life savings, having their identity stolen or losing their kids’ inheritance … you ask them if they think it’s a good use of police time and resources,” he said.
In Defense of DDoS
Are DDoS attacks just another form of civil disobedience?
Judging by the last two weeks, being an enemy of Julian Assange is only marginally less stressful than being Julian Assange. Amazon, PayPal, MasterCard, and Visa, which all moved to cut ties with Assange’s WikiLeaks after the site’s release of diplomatic cables, have been the targets of distributed denial-of-service attacks from a group that calls itself “Anonymous.” There is nothing fancy going on here. DDoS attacks simply aim to send more traffic to a target site than it can handle, slowing it down or making it temporarily unavailable.

Many prominent Internet personalities, including John Perry Barlow and Cory Doctorow, have spoken out against DDoS on the sensible-sounding grounds that one can’t fight for free speech by limiting it for others. How, then, does Anonymous defend its actions? In a press release (PDF), the self-described “Internet gathering” explains that its “goal is to raise awareness about WikiLeaks and the underhanded methods employed by … companies to impair WikiLeaks’ ability to function.” For this author, however, the most interesting bit of the press release comes in the next paragraph: “[A DDoS attack] is a symbolic action—as blogger and academic Evgeny Morozov put it, a legitimate expression of dissent” (italics theirs).

Yes, it’s true: I did write those words. Under certain conditions—some of which, I believe, are present in the case of Anonymous—DDoS attacks can be seen as a legitimate expression of dissent, very much similar to civil disobedience. In other words, there are cases where DDoS attacks have more in common with lunch-counter sit-ins than with acts of petty vandalism. There is a legal precedent for such comparisons. In 2006, a court in Germany, asked to decide whether a DDoS blockade of Lufthansa for allowing its planes to be used in the deportation of asylum-seekers was tantamount to a demonstration, opined that the civil-disobedience analogy is valid. (Germany being Germany, the organizers of the cyber-attack on Lufthansa’s site had first asked the local authorities for formal permission to go ahead but were turned down.)
Declaring that DDoS is a form of civil disobedience is not the same as proclaiming that such attacks are always effective or likely to contribute to the goals of openness and transparency pursued by Anonymous and WikiLeaks. Legitimacy is not the same thing as efficacy, even though the latter can boost the former. In fact, the proliferation of DDoS may lead to a crackdown on Internet freedom, as governments seek to establish tighter control over cyberspace.
Likewise, assessing the legitimacy of a particular DDoS attack is not the same as assessing its legality: There is no disputing the fact that DDoS is illegal in many countries (hence the “disobedience”). Thus, to figure out which cases of DDoS may deserve some leniency from the judges, we need to shift the focus away from the medium and on to the message.
John Rawls, one of the most influential philosophers of the 20th century, offered one of the best modern theories of civil disobedience in his 1971 masterpiece, A Theory of Justice. Rawls defended civil disobedience as long as the breach of law was public (i.e., authorities were notified of the disobedient act before or shortly after it occurred), nonviolent (i.e., the disobedient act did not impinge on the civil liberties of others and caused no injuries), and conscientious (i.e., the disobedient act was underpinned by serious moral convictions). Furthermore, Rawls argued that those who practice civil disobedience should be willing to accept the legal consequences of their actions, if only out of their fidelity to the rule of law.
Some elements of Rawls’ theory are not indisputable—Bertrand Russell, for example, believed that some violence might be acceptable, for it could force the media to pay attention to issues that may otherwise go unnoticed. Still, Rawls’ theory offers an elegant template for evaluating Anonymous’s DDoS warfare.
The attacks were clearly public: Anonymous widely advertised the targets, the software to be used, and even the timeframe. Anyone could follow their deliberations in their online chat. They were conscientious in as much as they believed that companies like Amazon and Visa behaved in a cowardly fashion by pulling support from WikiLeaks and that politicians—especially Joe Lieberman and Sarah Palin—should not have exerted pressure on them without first establishing a strong legal case against WikiLeaks.
Did the attackers want to change policies and laws and not just cause mischief? I believe so. One of their goals was to prevent other companies from bowing down to undue political pressure. Another objective was to show the government that prosecuting Assange based on the contentious Espionage Act of 1917 would enrage many digerati.
Things get a little foggier when it comes to whether the attacks should be classified as “violent.” While the DDoS attacks may have caused some material damage to their targets, this alone seems like a poor indicator of “violence.” That the attacks cause congestion of infrastructure is a feature, not a bug: After all, if acts of civil disobedience did not disrupt the normal flow of affairs, they would hardly be “disobedient.” One could also plausibly argue that since DDoS attacks cause only temporary rather than permanent damage to the attacked servers, they are far less violent than most acts of physical vandalism.
I’d argue, however, that the DDoS attacks launched by Anonymous were not acts of civil disobedience because they failed one crucial test implicit in Rawls’ account: Most attackers were not willing to accept the legal consequences of their actions. This is the crucial difference between Anonymous and the civil rights movement. Those who participated in lunch counter sit-ins — purchasing nothing but cups of coffee and paralyzing restaurants by preventing other patrons from sitting down — knew what they were getting themselves into. They were violating an unjust law, and they knew that they would likely be arrested for it. Their faces could be photographed, their papers could be checked. The civil rights-era protesters knew that effective civil disobedience could not be carried out in complete anonymity; members of the Anonymous collective have not grasped this yet.
How anonymous is Anonymous? While the FAQ for the collective’s preferred DDoS-launching software claims that those using it run a “zero” chance of arrest, Dutch security researchers have discovered (PDF) that the opposite is true: It’s actually very easy to trace all of its users, unless they take additional steps to “cover their tracks.” If those partaking in Anonymous attacks are cognizant of the fact that their online actions are fully traceable, this may mitigate the anonymity problem and make their actions far more legitimate than they are right now. Without such realization, their acts hardly qualify as civil disobedience and border on hooliganism. For what it’s worth, the announcement of Anonymous’ most recent operation explicitly calls on its participants to use proxies in order to guard their anonymity—as such, they are clearly not seeking to conduct their politics in the open.
While Anonymous’ attacks fall short of Rawls’ high standard for civil disobedience, we should not prejudge all DDoS attacks to be illegitimate. Yes, DDoS tactics are increasingly abused to silence independent media—newspapers in Belarus, Kazakhstan, Lebanon, and Burma have all fallen victim to DDoS attacks in the last few years. Moreover, such attacks are often launched by relying on zombie computers whose unsuspecting owners have no clue they’re being enlisted as part of an attack. That’s unacceptable however one looks at it.
But should democratic societies really treat everyone who participates in a DDoS attack as a hardened criminal? (The British law, for example, punishes anyone who downloads such tools with up to 10 years in prison.)
Clearly, not all DDoS attacks carry the same moral weight; it all depends on who is attacking whom, as well as how and for what reason. The ethical spectrum here is quite wide: While it’s hard to imagine a situation where launching a DDoS attack on the Web site of the New York Times would ever be justifiable, it’s not so hard to imagine morally permissible attacks on the Web site of the Iranian government or alleged fraudsters like the proprietor of DecorMyEyes. In some situations, it may even be OK for attackers not to disclose their identities fully: Few of us get furious at the sight of Iranian protesters wearing green scarves to protect themselves from the prying eyes of police.
If done right, DDoS may offer the much-needed antidote to the shallow and sterile politics of most Facebook groups and petitions, where participants take no risks and make no sacrifices. Sure, there is always a risk that DDoS attacks will degenerate into acts of vigilante justice. But the same risk exists with any kind of real-world protest or demonstration. This is the price we pay for not living in a police state where there are no unscheduled events or provocations. DDoS, like all forms of protest, is messy. But there will always be certain times and places—even more so in our increasingly networked world—when the use of “DDoS justice” is warranted.
Anonymous Italy hacks Italian State Police
Anonymous Italy hacks and dumps thousands of files belonging to the Italian State Police.
Apparently hacktivists have had access to Italian State Police Servers for months, carefully combing through reams of information. Hacktivists claim they have gathered information concerning inappropriate conduct by police, covert operations, and files related to political dissidents.
In a post entitled “AntiSecITA – Italian Police Owned & Exposed”, Anonymous Italy gives details of the hack and dump:
For weeks, we love to browse in your server, in your e-mail address, your portals, documents, records and much more.
We are in possession of a large amount of material, eg documents on interception systems, spreadsheets, bugs latest generation of covert activities, files related to the notav and dissidents; various circulars but also numerous e-mails, some of which demonstrate your dishonesty (eg a communication in which you explain how the weapon seized appropriarvi to a foreign man without committing the crime of receiving stolen goods).
The security level of your system, contrary to what we thought was really poor, and we take the opportunity to take our revenge. Is there any problem, Officer?
Reuters reports Anonymous Italy has leaked thousands of documents belonging to the Italian State Police. Information taken from state police servers and portals include police reports, mobile phone numbers, personal email, information on salaries, and soft-porn pictures.
4 Reasons Why Artificial Intelligence Fails in Automated Penetration Testing
Why, given today’s maturity of Artificial Intelligence (AI), can we not fully automate Penetration Testing?
Formal Modeling and Automation is one of the things I love. I try to model everything; sometimes modeling helps and sometimes it lands me in trouble. It helped me when I tried to model Penetration Testing and worked with my co-founder to design the first version of our automated Penetration Testing tool. Where it did not help is in dancing. I think I am a poor dancer since my mind thinks modeling.
By the time I modeled, I missed the beat. I believe there are a few things which we need to do from the heart and not from the mind. I was thinking about why, given today’s maturity of Artificial Intelligence (AI), we cannot fully automate Penetration Testing (or “maybe” we will never be able to).
Here are the top reasons that come to my mind.

Multi Stage Attack Planning is a PSPACE Complete Problem
In Penetration Testing, attack chaining becomes a critical element in terms of strategizing as well as executing some brilliant hacks. The human mind can sometimes compute brilliant attack plans in just a jiffy. However, when we try to model this as a standard “AI Planning” problem, we get into a mess.
Every exploit/attack can be modeled as an action with a precondition and a postcondition. So, the standard solution we can think of is to use “Planning Algorithms” to build the entire attack graph. However, the challenge is state explosion, and we will immediately run out of memory (a PSPACE Complete Problem).
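A minimal sketch of that modeling, with invented exploit names and facts (nothing here reflects the author's actual tool): each action carries preconditions and effects, and a breadth-first search enumerates reachable states. Even on toy inputs the number of distinct fact-sets grows combinatorially, which is the state explosion described above.

```python
from collections import deque

# Toy attack-planning model: each action has preconditions and effects,
# both expressed as sets of facts. Names are purely illustrative.
ACTIONS = [
    ("phish_user",      {"email_known"},    {"user_creds"}),
    ("exploit_web_app", {"web_port_open"},  {"shell_on_web"}),
    ("dump_hashes",     {"shell_on_web"},   {"local_hashes"}),
    ("pass_the_hash",   {"local_hashes"},   {"admin_on_db"}),
    ("reuse_creds",     {"user_creds"},     {"vpn_access"}),
    ("pivot_to_db",     {"vpn_access"},     {"admin_on_db"}),
]

def find_plan(initial_facts, goal):
    """Breadth-first search over states (sets of facts).

    Fine for toy examples, but the number of distinct states can grow
    exponentially with the number of facts -- the state-explosion problem."""
    start = frozenset(initial_facts)
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        state, plan = queue.popleft()
        if goal <= state:
            return plan, len(seen)
        for name, pre, post in ACTIONS:
            if pre <= state:
                nxt = frozenset(state | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None, len(seen)

plan, explored = find_plan({"email_known", "web_port_open"}, {"admin_on_db"})
print("plan:", plan, "| states explored:", explored)
```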
Though approximations can help, a planner can never find all the possible attack paths once the number of nodes increases beyond a threshold. However, when it comes to coverage, AI would definitely do better than humans (since humans get bored).

Modeling Creativity using Artificial Intelligence is far-fetched
Well, there has been some work in terms of Artificial Creativity.
Please indemnify the author if it doesn’t work. When it comes to designing some cool and creative attacks, we still do not have any substantial algorithm to match human creativity.

Programs cannot Question the Assumptions
Human minds can question fundamental assumptions; a program, however, runs on fundamental assumptions. Einstein challenged the assumptions of Newton.
Heisenberg challenged the assumptions of Einstein, and the game goes on. Any good pen tester/hacker challenges assumptions. When we broke Microsoft BitLocker encryption, we challenged the coders’ assumption that BIOS memory cannot be accessed from userland. A program does not have the capability to challenge assumptions, and that is a severe limitation when it comes to automating Penetration Testing.

“Artificial Intuition” is still in early days
Humans have intuition: the ability, as the wiki definition puts it, to acquire knowledge without conscious reasoning. We can sometimes solve a brilliant problem without the use of any reasoning.
Artificial Intuition aims to model this, but we are still at quite a primitive state compared with what our brains can do. I am a big believer in AI and a bigger believer in the human mind. We did use some decent bit of AI to automate Penetration Testing.
While doing that, I learnt more about what we cannot do than what we can do. I am sure that with time AI will get better, but will we ever be able to do Penetration Testing without humans?