We know PayPal is thinking about adding Bitcoin support to its payment service, but if you can’t wait to check the Bitcoin prices on those eBay auctions you’re sniping, website modding service BetterInternet has you covered.
BetterInternet, which debuted at Techono.me last October, acts as a real-time filter for any website, allowing interested parties to develop modifications for their favorite sites. Co-founder Oded Golan said the team built the eBay/Bitcoin mod to get a sense of how volatile the crypto-currency is.
Sadly, the mod doesn’t make it possible to bid with Bitcoins, so I won’t be able to buy these hot pants anonymously just yet.
If you’re looking for something a little more useful, check out this Flipboard-layout version of TIME Magazine and this Rotten Tomatoes/IMDb mashup.
First there was ecommerce, a term developed in the early '80s to abbreviate "electronic commerce," or sales made possible through electronic funds transfer (and later, the Internet). Since then, marketers have gleefully affixed various letters to the word "commerce" to describe sales (or the potential for sales) made through different platforms: m-commerce for mobile, f-commerce for Facebook and p-commerce, which, I've discovered recently, is an abbreviation for both "participatory commerce" and "Pinterest commerce."
WHAT IS PARTICIPATORY COMMERCE?
A few quick Google searches reveal that the phrase participatory commerce was first coined in 2005 by Mark Pincus, founder and CEO of gaming juggernaut Zynga. It was popularized (to a degree) five years later, when entrepreneurs Daniel Gulati and Vivian Weng used it to describe the model for their new online retailing startup, FashionStake (acquired by Fab.com in January 2012).
Participatory commerce, according to Gulati, is a sales model that allows shoppers to participate in the design, selection or funding of the products they purchase. NIKEiD, for example, lets customers customize the colors and materials of Nike shoes. Online womenswear retailer ModCloth has a "Be the Buyer" program that invites shoppers to vote to determine what designs are sent into full-scale production, similar to a program FashionStake once offered. Users of Kickstarter are able to determine whether a product gets made by contributing to a funding goal. These types of features are frequently grouped under the definition of "social commerce," or s-commerce, as well.
A SECOND MEANING: PINTEREST COMMERCE
When Pinterest's popularity began to skyrocket in mid-2011, retailers were quick to recognize its potential as a sales driver, giving rise to the phrase "Pinterest commerce," which, like participatory commerce, has sometimes been abbreviated to "p-commerce."
Pinterest itself has not yet enabled selling on its site, but retailers continue to run tests to see if they can convert the network's more than 48 million users to customers. Some retailers, including Wayfair, have found that visitors from Pinterest are more likely to make a purchase and to spend more than those referred by other social networks, including Facebook and Twitter.
SHOULD YOU USE "P-COMMERCE"?
No. Not only will you sound like a tool, there's a good chance no one will know what you're talking about.
This Q&A is part of a weekly series of posts highlighting common questions encountered by technophiles and answered by users at Stack Exchange, a free, community-powered network of 100+ Q&A sites.
user396089 is more than competent when it comes to writing code in "bits and pieces." Planning and synthesizing that code into a complex, coherent app is the hard part. "So, my question is, how do I improve my design skills," he asks. And to that, some more experienced programmers answered...
See the original question here.
Be a bad designer for a bit
Amadeus Hein Answers (8 votes):
Well, there's no golden apple for this kind of question, and I feel perhaps this is for every coder himself to find what's right for him. Here's my take, anyway.
You could read books on the subject. Great books. Fantastic books. But I find that these books only help you once you've tried to build and design an application—and failed.
For me, it's all about experience. When I started as a rookie I read books on how to design. I didn't understand much of the content back then. When I started working and had to design applications myself, I made very messy applications. They worked, but they were a pain to maintain. Then I read those books again—and this time I better understood them.
Now, I continue making new mistakes and learning from the old ones.
Related: "What to plan before starting development on a project?"
Stick to some basic rules
Konrad Morawski Answers (4 votes):
Read about patterns, sure, but first and foremost read about anti-patterns. Recognizing anti-patterns is important, and it's easier to understand why something shouldn't be done in such a way than why it should.
See SourceMaking's post on anti-patterns, for example.
Write code so that it can be adjusted quickly if requirements change (which is very common in production environments).
Be super-skeptical about adding "just one more little hack." One more here, one more there, and the code becomes unmaintainable.
Value the open/closed principle.
Write tests (as in TDD). They force you to think your design through even before you actually implement it.
Browse the code of open source projects (reasonably sized ones, that is). I used to be surprised at—usually—seeing so many levels of abstraction. Now I understand it's not art for art's sake, there's a reason why it's done this way.
Decompose it
Giorgio Answers (3 votes):
One principle that I find very important for good design is decomposition:
If a class is too big (more than, say, 300-400 lines of code) break it up into smaller classes.
If a method is too big (say, more than 50 lines of code) decompose it; if a project contains more than 50 classes, decompose it.
The key is to estimate the size of your system and construct several abstraction layers (e.g. subsystem, application, project, module, class, method) that allow you to decompose your code into understandable units with clear relationships between them and few dependencies.
Forget about design
kevin cline Answers (2 votes):
Stop designing and learn to refactor code. Incremental development with continuous and aggressive refactoring will result in a much cleaner end product than any up-front design.
Find more answers or leave your own at the original post. See more Q&As like this at Programmers, a site for conceptual programming questions at Stack Exchange.
Security analysts have detected an ongoing attack that uses a huge number of computers from across the Internet to commandeer servers that run the WordPress blogging application.
The unknown people behind the highly distributed attack are using more than 90,000 IP addresses to brute-force crack administrative credentials of vulnerable WordPress systems, researchers from at least three Web hosting services reported. At least one company warned that the attackers may be in the process of building a "botnet" of infected computers that's vastly stronger and more destructive than those available today. That's because the servers have bandwidth connections that are typically tens, hundreds, or even thousands of times faster than botnets made of infected machines in homes and small businesses.
"These larger machines can cause much more damage in DDoS [distributed denial-of-service] attacks because the servers have large network connections and are capable of generating significant amounts of traffic," Matthew Prince, CEO of content delivery network CloudFlare, wrote in a blog post describing the attacks.
It's not the first time researchers have raised the specter of a super botnet with potentially dire consequences for the Internet. In October, they revealed that highly debilitating DDoS attacks on six of the biggest US banks used compromised Web servers to flood their targets with above-average amounts of Internet traffic. The botnet came to be known as the itsoknoproblembro or Brobot, names that came from a relatively new attack tool kit some of the infected machines ran. If typical botnets used in DDoS attacks were the network equivalent of tens of thousands of garden hoses trained on a target, the Brobot machines were akin to hundreds of fire hoses. Despite their smaller number, they were nonetheless able to inflict more damage because of their bigger capacity.
There's already evidence that some of the commandeered WordPress websites are being abused in a similar fashion. A blog post published Friday by someone from Web host ResellerClub said the company's systems running that platform are also under an "ongoing and highly distributed global attack."
"To give you a little history, we recently heard from a major law enforcement agency about a massive attack on US financial institutions originating from our servers," the blog post reported. "We did a detailed analysis of the attack pattern and found out that most of the attack was originating from [content management systems] (mostly WordPress). Further analysis revealed that the admin accounts had been compromised (in one form or the other) and malicious scripts were uploaded into the directories."
The blog post continued:
"Today, this attack is happening at a global level and WordPress instances across hosting providers are being targeted. Since the attack is highly distributed in nature (most of the IPs used are spoofed), it is making it difficult for us to block all malicious data."
According to CloudFlare's Prince, the distributed attacks are attempting to brute force the administrative portals of WordPress servers, employing the username "admin" and 1,000 or so common passwords. He said the attacks are coming from tens of thousands of unique IP addresses, an assessment that squares with the finding of more than 90,000 IP addresses hitting WordPress machines hosted by HostGator.
"At this moment, we highly recommend you log into any WordPress installation you have and change the password to something that meets the security requirements specified on the WordPress website," the company's Sean Valant wrote. "These requirements are fairly typical of a secure password: upper and lowercase letters, at least eight characters long, and including 'special' characters (^%$#@*)."
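Those requirements can be expressed as a short check. This is only an illustrative sketch based on the quote above, not WordPress's actual validation code, and the set of "special" characters is limited to the examples given in the post:

```python
def meets_requirements(pw: str) -> bool:
    """Check a password against the requirements quoted above: at least
    eight characters, upper- and lowercase letters, and at least one of
    the listed 'special' characters."""
    specials = "^%$#@*"
    return (len(pw) >= 8
            and any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c in specials for c in pw))

print(meets_requirements("admin"))          # False -- exactly what the attackers hope for
print(meets_requirements("Tr4vel#photos"))  # True
```

A password that fails any single rule, such as one under eight characters, is rejected.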
Operators of WordPress sites can take other measures too, including installing plugins such as this one and this one, which close some of the holes most frequently exploited in these types of attacks. Beyond that, operators can sign up for a free plan from CloudFlare that automatically blocks login attempts that bear the signature of the brute-force attack.
Already, HostGator has indicated that the mass attack is putting huge strains on websites, which slow to a crawl or go down altogether. There are also indications that once a WordPress installation is infected, it's equipped with a backdoor so that attackers can maintain control even after the compromised administrative credentials have been changed. In some respects, the WordPress attacks resemble the mass compromise of machines running the Apache Web server, which Ars chronicled 10 days ago.
With so much at stake, readers who run WordPress sites are strongly advised to lock down their servers immediately. The effort may not only protect the security of the individual site. It could help safeguard the Internet as a whole.
Curious how to configure access to a VPN client on your iPhone, iPod touch, or iPad? L2TP, PPTP, and IPSec VPN support are all built right into iOS and we'll show you how to set them up after the break!
One thing to note is that you'll need to make sure you have all the settings and information about your VPN service handy. Some companies will not allow mobile access to a VPN, and some carriers limit 3G VPN access to business accounts, so that's another thing you'll need to check on before attempting to add one.
These screenshots show iPhone setup but iPod touch and iPad are similar.
First you'll need to configure your VPN by adding the settings for your VPN (in most cases, these are provided by the system administrator or IT person at your company).
To configure your VPN, do the following:
Tap Settings
Tap General
Tap Network
Tap VPN
Tap Add VPN Configuration
Along the top you will see some tabs; you'll need to select which type of configuration you need. In most cases, you'll choose IPSec (unless your system administrator has told you otherwise). For this example, I've used IPSec.
Enter the information for your VPN in the corresponding fields. If you use a proxy, make sure to enable it toward the bottom of the settings page.
Tap Save
You've now configured your VPN for use. Now you'll need to turn it on.
From your homescreen, tap Settings
Under the main settings page, you'll now see a VPN option; this only appears when you have a VPN configured. Toggle the switch to On
Your phone should connect to the VPN. If an error message pops up, go back into your VPN settings and make sure all your settings are input correctly.
That's it! If you guys have ever used this, let us know your input too. Have any issues setting up a VPN? Check out our TiPb Forums to get help from many of our awesome community members.
Tips of the day will range from beginner-level 101 to advanced-level ninjary. If you already know this tip, keep the link handy as a quick way to help a friend. If you have a tip of your own you'd like to suggest, add it to the comments or send it in to . (If it's especially awesome and previously unknown to us, we'll even give ya a reward...)
If you're looking for an affordable VPN for iPhone, click here to visit HMA VPN. It costs only $7/month with unlimited usage.
Setting up a VPN for iPhone is very easy. The Apple iPhone comes with built-in VPN support, so you can connect to a VPN without any difficulty. Here are the simple steps to make the connection:
Sign up for an account with a VPN provider. I am using HMA VPN at $7/month, and the account can be shared with my iPad, Mac and Apple TV.
HMA should give you login information such as an IP address, username and password. Make sure the account uses L2TP, PPTP or IPSec; OpenVPN is not natively supported on the iPhone.
On your iPhone, touch the Settings app, select General > Network > VPN > Add VPN Configuration
Enter the information you got from HMA
Now the basic configuration is done, and you have the option to turn the VPN on or off as needed. Make sure you see the VPN icon in the status bar whenever you're accessing sensitive information on a public network.
What to do with VPN for iPhone?
You may watch Netflix on iPhone outside US when you connect to VPN server in United States
You may make secured but cheap overseas call with Skype over VPN
You may unblock access to Facebook and Twitter if you are physically located in China.
You may even listen to Last.fm or Pandora when you are travelling abroad
With Austin, Texas, expected to be named as the next city for the Google Fiber project, possibly as soon as tomorrow, the analysts at Bernstein Research have published some estimates on how the economics are shaping up for the only place where Google has built out services so far — Kansas City. The firm also sounds a note of caution about whether the search giant will ever embark on a nationwide effort: it could cost up to $11 billion to build out gigabit Internet and TV service to another 20 million homes to achieve a medium-to-large rollout to compete with other providers.
As Ryan pointed out last year when Google first unveiled the details of the Kansas City project, there are a couple of big hurdles to getting a new broadband service off the ground, starting first with building out infrastructure, and then connecting it. Bernstein’s Carlos Kirjner and Ram Parameswaran now put a price tag on that: They say it will cost $84 million to pass (but not actually connect) 149,000 homes — Google’s first phase of buildout for Kansas City. Some $38 million will go into Kansas City, Kan., and $46 million into Kansas City, Mo., with the cost per home respectively at $674 and $500.
Connecting those homes is another matter. Google has detailed three service tiers, offering a variety of speeds and payment plans, including one with seven years of free service (after you pay a $300 installation fee):
Bernstein estimates that to connect up a broadband-only service, it will cost Google $464; those taking double-play of broadband and pay-TV services will cost $794 to connect. “To reduce labor costs, Google will connect homes in waves within each neighborhood, taking advantage of the pre-subscription process it ran asking customers to express interest in its services as it deployed the network,” Bernstein writes. That first wave, of 12,000 homes on “day one” of the service equates to an 8 percent penetration and will cost an additional $10 million for Google, making for a total cost of $94 million for the Kansas City project — $42 million in Kansas and $52 million in Missouri.
As the project gains some momentum, the revenues incoming from new customers will offset the costs of more growth. Based on 8 percent penetration of homes passed on day one, Bernstein writes, “the incremental cash investment to grow to 18% penetration in the first year will be of approximately $2 million, with $15 million in incremental cash costs offset by $13 million of contribution from users.” Bernstein estimates that double-play customers will bring in $64/month and broadband-only customers will bring in $47/month.
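Bernstein's figures hang together as a quick back-of-the-envelope check. Every number below comes from the article; only the arithmetic connecting them is added:

```python
homes_passed = 149_000
cost_to_pass = 84_000_000        # $84M to pass (but not connect) the homes

day_one_homes = 12_000           # the first connection wave
penetration = day_one_homes / homes_passed
print(round(penetration * 100))  # 8 -- the "8 percent penetration" cited

wave_one_cost = 10_000_000       # additional cost of connecting that first wave
total_project = cost_to_pass + wave_one_cost
print(total_project)             # 94000000 -- the $94M project total

# Growing from 8% to 18% penetration in the first year:
incremental_costs = 15_000_000
user_contribution = 13_000_000
print(incremental_costs - user_contribution)  # 2000000 -- the ~$2M net cash investment
```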
Kirjner and Parameswaran write that a wider rollout around Kansas City for 300,000 homes would more than double build-out costs to $170 million (before acquiring customers and connecting those homes). Austin, meanwhile — expected to be announced tomorrow — could end up costing either the same amount or less to build out, since “preliminary analysis suggests that Austin’s population density is materially higher than Kansas City’s and hence could yield a cost per home passed (and hence total cost to pass) that is lower or similar to KC’s even if the network build-out requires a larger portion of (usually more expensive to deploy) buried or underground infrastructure.”
Yet played out on a wider scale, Bernstein is more skeptical:
“We remain skeptical that Google will find a scalable and economically feasible model to extend its build out to a large portion of the US, as costs would be substantial, regulatory and competitive barriers material, and in the end the effort would have limited impact on the global trajectory of the business.”
Using existing providers as a point of comparison (Comcast passes 53 million homes, Time Warner Cable 30 million, and Charter 12 million), Bernstein works out the cost for a 20 million-home coverage for Google, which would give it a 15 percent coverage and rank it as a “medium-to-large domestic access and pay-TV provider.”
Kirjner and Parameswaran estimate that if Google built out a fiber network to serve 20 million homes over a period of five years, “the annual capex investment is required to be in the order of $11 billion to pass the homes, before acquiring or connecting a single customer.” There is a big question mark over why, in fact, Google would ever embark on such a project. “It would have limited impact on the global broadband access industry beyond these 20 million homes,” Bernstein writes. As a point of comparison, it was estimated that it cost Verizon, before it halted FiOS buildout, about $4,000 per home to connect it to its fiber network.
Then again, the Kansas City project, covering neighborhoods in both Kansas and Missouri, has always appeared to have a two-fold purpose. It is a test for Google to see whether it could be a viable infrastructure-based service provider; and it is a way for Google to test out new applications and services, and to amass more data about consumer behavior.
Bernstein has weighed in with a pretty definitive opinion on the first of these (and not for the first time, we should add), but that still leaves questions about the second motivation for Google Fiber.
There is a very clear opportunity for Google to cherry pick a handful of cities for fiber rollouts for in-the-wild tests. These could then subsequently get launched on other service providers’ networks, and provide more opportunities for Google to package its advertising and content in more ways.
Remember, Google’s longer-term vision of how content gets consumed extends across multiple screens. While it’s setting out a winning stake on the mobile screen, by virtue of Android, and dominates a lot of activity on the PC screen, by way of its search engine, its Chrome browser and its many cloud services, it’s relatively far behind the game on what’s still a main screen in the home, the television.
In that regard, it may be that Google never planned for something as costly and extensive as a 20-city, $11 billion investment. A far smaller one in the hundreds of millions, however, could end up being money well spent.
Acer is looking to take a bite out of the tablet market share with its 7.9-inch tablet. Thanks to French online retailer Rue du Commerce, the tablet is ready for pre-order at €199. However, according to sources from the supply chain, Acer is considering lowering the price of its Iconia A1-810 tablet.
To compete with Apple's iPad mini, the Nexus 7 and Amazon's Kindle Fire HD, the company would take the tablet from $249.99 to $149.99. However, Acer declined to comment on the rumors. The device should be available by the end of this month.
Microsoft rolled out its third software update for Surface, Surface Pro, and Surface RT, focused on fixing WiFi connectivity issues. The tablets have been hit by numerous WiFi glitches, including connection drops, untimely DHCP lease expiry without renewal (the so-called "Limited Connectivity" bug), and software crashes that result from these connectivity drops.
Dated April 9, the newest software update for the three Surface tablets, in the words of Microsoft, addresses the following for Surface RT:
Certain "Limited Connectivity" issues resolved
Improves WiFi to handle a wide range of access points.
Resolves system crashes caused by certain WiFi issues.
The update for Surface Pro has its own share of issues addressed:
Resolves an issue with on-screen touch navigation in the UEFI boot menu.
Resolves some Surface Type and Touch cover connectivity issues.
Support for 106/109 keyboards on North American Surface devices.
Resolves an issue where toggling airplane mode would disable the WiFi driver.
Users can either wait for the next automatic updates look-up, or navigate to Settings > Change PC Settings > Windows Update, and manually check for updates.
This is the third batch of "cumulative updates" for Surface this year addressing WiFi connectivity issues, the previous ones arriving in February and March.
Sure, those fancy new 802.11ac routers are wicked fast, but the IEEE isn’t expected to ratify that standard until later this year. So today’s 802.11ac hardware could be rendered obsolete if the standards body changes course between now and November.
That probably won’t happen, but if you value interoperability assurances more than raw speed—for instance, if you’re buying networking equipment for your small business—you’ll want to stick with products based on the tried-and-true 802.11n standard. Here’s a look at five of the best routers in that category.
You might recall that the first 802.11n routers hit the market in advance of the IEEE’s final ratification of that standard. But there’s a key difference: Back then, the Wi-Fi Alliance ran a certification program that not only assured consumers that all 802.11 Draft N equipment bearing the Wi-Fi logo would operate together, but that those devices would also be compatible with the final 802.11n standard. The Wi-Fi Alliance is not operating such a program for gear based on the 802.11ac draft standard, so you’re on your own.
What the jargon means
Each of the routers in this roundup is a so-called N900 model, meaning it supports three 150-megabits-per-second spatial streams on the 2.4GHz frequency band, and three 150-mbps spatial streams on the 5GHz frequency band. That’s 450 mbps in total for each band. Multiply that by two and you get 900. You should be aware, however, that none of these routers will actually deliver 900 mbps of data throughput—N900 is just a label.
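The N900 label decodes into simple arithmetic, as described above: three spatial streams per band, 150 mbps each, across two frequency bands.

```python
streams_per_band = 3
mbps_per_stream = 150

per_band = streams_per_band * mbps_per_stream
print(per_band)           # 450 mbps per band

label = per_band * 2      # 2.4GHz band + 5GHz band
print(label)              # 900 -- the marketing label, not real-world throughput
```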
Each frequency band is subdivided into channels. The 2.4GHz band has 11 available channels (in North America, that is). Each channel is 20MHz wide, but only three of the 11 channels don’t overlap one another. To reach 450 mbps, the router must bond two of the 11 channels together to form one that’s 40MHz wide. But in order to avoid a situation where one router consumes all the available bandwidth in its vicinity, a router is supposed to abide by what’s known as a “good neighbor” policy: It should not bond two of the non-overlapping channels, and it should engage in channel bonding only if it doesn’t detect any nearby routers operating in the same spectrum. As a result, it’s almost impossible for a 2.4GHz router to achieve channel bonding in a city environment, because so many other devices operate in this same spectrum (everything from microwave ovens to cordless phones, not to mention other routers).
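The "good neighbor" policy above can be sketched as a simple decision function. The channel numbers and the neighbor-detection input are simplified assumptions; real router firmware is considerably more involved:

```python
NON_OVERLAPPING = {1, 6, 11}  # the three non-overlapping 2.4GHz channels (North America)

def may_bond(primary: int, secondary: int, neighbors_detected: bool) -> bool:
    """Decide whether a router may bond two 20MHz channels into one 40MHz channel."""
    # Rule 1: never bond two of the non-overlapping channels
    if primary in NON_OVERLAPPING and secondary in NON_OVERLAPPING:
        return False
    # Rule 2: bond only when no nearby router is operating in the same spectrum
    return not neighbors_detected

print(may_bond(1, 6, False))  # False -- both channels are non-overlapping
print(may_bond(3, 7, False))  # True  -- clear spectrum
print(may_bond(3, 7, True))   # False -- a neighbor is on the air
```

In a dense urban environment `neighbors_detected` is almost always true, which is why the article notes that 2.4GHz channel bonding rarely happens in cities.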
There are many more channels available on the 5GHz frequency band, so crowding is much less of a problem. As a result, a router operating in the 5GHz spectrum can more readily engage in channel bonding without interfering with other routers operating nearby. The downside of operating a router on the 5GHz band is that it will deliver less range, because its shorter wavelengths are more readily absorbed by walls and other solid objects in the signal path.
You might be wondering why 802.11ac routers, which operate exclusively in the 5GHz band, don’t have the same problem with range. One reason is that the 802.11ac standard uses channels with more bandwidth (each channel is 80MHz wide, versus the 20MHz-wide channels that 802.11n routers use). Another is that the 802.11ac standard uses a higher-density signal modulation scheme (256 QAM versus the 64 QAM that 802.11n routers use).
Common features of 802.11n routers
Cloud services Some routers allow you to administer them from an Internet connection. More-advanced models let you access storage attached to the router from the cloud, and a few even give you the power to access PCs connected to the router from the cloud.
Guest network A guest network allows you to establish a separate wireless network that your guests can use. It allows them to access the Internet, but prevents them from accessing other PCs or storage devices operating on your network. A guest network is a courtesy that small-business owners can offer their customers, and that individuals and families can offer their visitors.
Media servers Any router should be equipped with a UPnP (Universal Plug and Play) server at a minimum. This enables the router to stream music and video to client devices on the network. More-advanced routers will also offer a DLNA-certified streamer (the acronym stands for Digital Living Network Alliance, and the standard has been embraced by nearly every company in the consumer-electronics industry). If you’re an iTunes fan, you’ll appreciate having a router with an iTunes server.
Parental controls This feature is ostensibly designed for moms and dads who want to shield their children from the seamier side of the Web, but small-business owners might also find it useful when deployed in moderation. It enables an administrator to establish rules as to when individual computers can access the Internet, and it can block particular websites or even entire categories of sites.
Quality of Service If you depend on your router for VoIP (Voice over Internet Protocol), frequently stream music and video around your network, or play online games, you’ll want a router with good QoS (Quality of Service) features. Most high-end routers have at least the basics, meaning that the router can analyze traffic moving through the network and distinguish between lag-sensitive packets (VoIP and media streams, for instance) and non-lag-sensitive packets (such as file downloads). The router will assign higher priority to the former, to prevent dropouts, and lower priority to the latter (because any dropped packets can be re-sent with little impact). A more advanced router will allow you to customize its QoS settings or even allow you to write your own rules.
USB ports High-end routers typically sport one or two USB ports. You can use one port to share a printer and the other to share storage—in the form of a USB hard drive—between all the computers on your network. Some router manufacturers are taking the step of building a hard drive into the router itself. If you need fast storage, however, you’ll be better served by a dedicated NAS box.
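The basic QoS behavior described above amounts to a priority queue: lag-sensitive packets go out first, lag-tolerant ones wait. The traffic classes and priority values here are illustrative assumptions, not any particular router's rules, and real routers do this in firmware:

```python
import heapq

# Lower number = higher priority; unknown traffic gets a middle priority.
PRIORITY = {"voip": 0, "video": 0, "game": 0, "download": 2, "bulk": 2}

def schedule(packets):
    """Return packets in transmit order: lag-sensitive traffic first,
    ties broken by arrival order."""
    queue = [(PRIORITY.get(kind, 1), i, kind) for i, kind in enumerate(packets)]
    heapq.heapify(queue)
    return [kind for _, _, kind in (heapq.heappop(queue) for _ in range(len(queue)))]

print(schedule(["download", "voip", "bulk", "video"]))
# ['voip', 'video', 'download', 'bulk']
```

A router offering customizable QoS would, in effect, let you edit the priority table.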
What good is a secure password program if you can't get access to your data when and where you need it?
Using a password manager application to automatically log into Web sites -- and to secure and manage all of your user IDs and passwords -- is a great help in organizing your digital life. But most password managers simply save your data in an encrypted file and then leave it stranded on one computer.
That doesn't work if you have a Windows desktop at work, a Mac or Linux machine at home, an iPad in your family room and an Android phone in your jacket. You need secure access to your data from any device, at any time, whether you're online or offline. And you don't want to have to manually update several work, home and mobile password databases every time you change an account's credentials -- something I've been doing for years.
The makers of an emerging breed of password managers are striving to provide secure online access to your passwords in the cloud and give you a synchronized, local copy of your password database on every computer and mobile device, no matter what operating systems, browsers or mobile platforms you use. (Having a synchronized local copy means you don't have to worry if the password database in the cloud goes down -- or the vendor suddenly disappears.)
In Video: How to Retrieve a Lost Windows Password
For this roundup, I looked at four products in this category: Agile Web Solutions' 1Password, Clipperz from Clipperz SRL, LastPass from the company of the same name and RoboForm from Siber Systems Inc. I tested each on four different platforms: a MacBook Pro running OS X 10.5.8, laptops running Windows 7 and Windows XP, and an iPad. I also tested browser add-ons for Internet Explorer, Firefox and Chrome.
Keeping passwords secure
All four applications work by having your computer encrypt passwords and other personal data before uploading a copy to the cloud. Because the data has been encrypted locally, the vendor does not have the key to unlock the data stored in the cloud: Only you do.
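The encrypt-locally model described above starts with deriving the vault key from the master password on your own machine. This is a minimal standard-library sketch of that idea; the specific scheme (PBKDF2 with SHA-256) is an illustrative assumption, since none of these products publish identical code:

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    """Derive the encryption key locally from the master password.
    Only ciphertext is uploaded; this key never leaves the device."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

salt = os.urandom(16)  # random salt, stored alongside the encrypted vault
key = derive_vault_key("correct horse battery staple", salt)
print(len(key))        # 32 bytes -- suitable for an AES-256 key
```

The derived key would then feed an authenticated cipher (AES-GCM, for instance) to encrypt the password database before it is synced to the cloud, which is why the vendor holds only data it cannot read.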
You secure your password database by creating a user account name and a master password. Once you're logged in, the applications automate the process of gathering user IDs, passwords and other information as you visit each Web site. They can then automatically fill in and submit your log-in credentials each time you return to those sites.
LastPass, RoboForm and 1Password can also fill in forms using data stored in profiles. You can create "identities" that have access only to subsets of your password data (such as work-related information, personal data or data for systems shared by you and your spouse), and you can store other types of sensitive data, from locker combinations to safe deposit box numbers. The way in which Clipperz supports forms is a little more involved, requiring the use of bookmarklets and mapping fields into what Clipperz calls Direct Login links.
The local versions of these products rely on any or all of three different technologies to do the job: Native applications designed to run on each operating system, extensions and plug-ins for popular browsers, and bookmarklets that can run on any browser that supports JavaScript. Not all products support a locally cached copy of your data on every device. In some cases, a product supports a local cache on one platform but not another; others support a local copy, but it's read-only.
Support for mobile devices is more limited. On some mobile devices, such as Apple's iPhone or Android-based phones, the password management application may include a simple, stand-alone browser when it can't integrate tightly with the device's native browser. On some platforms, some products may lack the ability to maintain a synchronized, local copy of your data.
As a category, these products are still evolving. Once you figure out the best way to work with them, however, they make securing and accessing your passwords from any device, at any time, convenient and easy.
Symantec’s 2013 edition of Norton Internet Security ($50 for one year and three PCs, as of 12/19/12) is a solid performer with a polished, touch-optimized user interface. This security suite didn’t totally dominate its competitors, but it did completely block, detect, and disable all malware in our real-world tests, and it performed well enough overall to snag second place in our roundup.
Norton’s excellent showing in our real-world attack test indicates that it should be effective at blocking brand-new malware attacks as it encounters them in the wild. As noted in the F-Secure review, of the security suites we tested, four others were also successful at completely blocking 100 percent of attacks: Bitdefender, F-Secure, G Data, and Trend Micro.
Norton produced stellar—though not absolutely perfect—results in detecting known malware. In our malware-zoo detection test, the program successfully detected 99.8 percent of known malware samples. Norton Internet Security also put up a perfect score in our false-positive test: It didn’t mistakenly identify any safe files, out of more than 250,000, as being malicious.
Norton does an acceptable job of cleaning up a system that has already been infected, but it missed some infections completely in our evaluation. In our system cleanup test, the program detected and disabled 90 percent of infections, and completely cleaned up 60 percent of infections. This is a decent but not fantastic showing—seven of our tested suites detected and disabled 100 percent of infections, and six cleaned up all traces of infection at least 70 percent of the time.
On the other hand, Norton Internet Security is a relatively lightweight program that won’t bog down your system. It added about half a second to startup time (compared with a PC that had no antivirus program installed) and 3 seconds to shutdown time; in all of our other speed tests, it was faster than average. Scanning speeds were quick, too: the package needed just 1 minute, 19 seconds to complete an on-demand (manual) scan and 2 minutes, 55 seconds to complete an on-access scan, both better-than-average results.
Norton’s interface is very polished and simple, and the program installs with just one click. The main window has tile-like buttons that appear designed to work well with Windows 8 touch systems. You’ll find four tiles on the main screen: a tile that shows your protection status, along with information about your CPU usage; a ‘Scan Now’ tile; a LiveUpdate tile (which you’d use to install any updates to the suite); and a tile for advanced settings. You can also access the settings via the Settings tab, which is located at the top of the screen.
The settings menu is relatively easy to navigate, though it has more options than a beginning user really needs. Still, Norton does a good job of explaining different features and toggles, and a little help button (which takes you to Norton’s online support site) is always located next to confusing terms.
The 2013 version of Norton Internet Security is definitely worth a look, especially if you’re a Windows 8 user.
You don't necessarily need a touchscreen monitor to use Windows 8, but swiping your finger to invoke the Charms bar is a lot more fun than holding down the Windows key and pressing C. I'll admit that initially I had to force myself to use the new touch gestures, but after a short time with the OS I found myself reaching out to touch even my MacBook Pro's screen.
Since the debut of Microsoft's latest operating system, monitor manufacturers have been working to bring touchscreen support to desktop users. At first it was nearly impossible to find a display that responded to all the gestures in Windows 8, but now we're able to review three 23-inch models with 10-point multitouch support (meaning the monitors recognize all 10 fingers on both your hands).
We put the three monitors through a gauntlet of tests to find which one offers the best value, quality and feature set to win a spot in your workstation.
Acer T232HL
Acer's T232HL is a 23-inch, 10-point touchscreen LCD monitor with a resolution of 1920 by 1080. It uses environmentally friendly LED backlighting and a high-quality IPS panel for wide viewing angles.
The T232HL offers VGA, HDMI, and DVI inputs. Acer thoughtfully includes cables for each connection type in the shipping box, but we still found the initial setup to be a bit tricky. The T232HL’s stand uses a hinged design that lies flat against the display for shipping. It took a lot of force (and courage) to open the stand, but eventually we were able to pull it into position. A note in the setup guide would go a long way to alleviate fears of snapping the base off your newly purchased monitor. The stand doesn’t allow for height adjustment, pivot, or swivel, but it does tilt back to a 45-degree angle very easily, once you’ve set it up.
Acer's T232HL touchscreen display has the edgiest design aesthetic of the three models we reviewed.
We connected the display via HDMI, and our test PC recognized it automatically as a Windows touchscreen device, booting directly into its native resolution without issue. The T232HL delivered impressive performance as we ran the display through our battery of test images. On our solid-color screens, we found no stuck or dead pixels, and color and brightness were uniform across the screen. Its viewing angle was top notch, losing contrast only at extreme angles. Its glossy surface, which can be problematic in terms of glare, helps to enhance the appearance of photographs. Even gray tones appeared neutral at its default color settings.
While no one would confuse the T232HL with a Retina display, text was legible even at small point sizes. We also watched test clips of HD video, and the action played smoothly without any obvious artifacts. The built-in speakers are okay, if a little tinny. The speakers are located in the back of the display, and they sound like it.
As for touchscreen performance, the T232HL was responsive and accurate. We didn’t have any issues using the gestures, closing open windows, or selecting menu items. The stand’s ability to lean back to a 45-degree angle made the touch features—especially the on-screen keyboard—easier to use for extended periods of time.
Despite a little trouble in our initial setup, the Acer T232HL is a nice display that uses high-quality components. It performed admirably in our text, motion, color, and uniformity tests, and it would definitely be worth considering even without its 10-point touch capabilities.
Acer T232HL, $549 (street price)
Pros:
10-point touchscreen
Wide viewing angle
Accurate colors
Smooth gradients
Cons:
Stand is difficult to set up and offers no height adjustment
Bottom line:
This is a terrific display, but we do wish it allowed height adjustments.
4 stars
LG Flatron 23ET83V-W
The LG Flatron 23ET83V-W is based on a high-quality IPS panel with 1920 by 1080 resolution and an LED backlight. Sporting a white plastic case with a thin black bezel, this glossy-screened monitor connects to your PC via HDMI or VGA.
LG's Flatron 23ET83V-W doesn't offer the great off-axis viewing experience we've come to expect from IPS displays.
A red light illuminates a thin, translucent, crescent-shaped plastic tab on the bottom edge of the screen, reflecting off the desktop below. The tab is not a button, but it sits just beneath the buttonless touch power control. Although the absence of physical buttons might make for a cleaner, simpler-looking design, we prefer the tactile response of a button. Maybe we’d get used to the menu system over time, but we found ourselves frequently tapping the wrong controls and having to exit and reenter the menus.
Aside from a few degrees of tilt, the LG display’s stand offers little ergonomic flexibility. You can’t adjust the height, pivot it into portrait mode, or swivel the screen from left to right. Other touchscreen monitors we’ve looked at can lean back farther, making it easier to use touch gestures without having shoulder fatigue setting in immediately.
A quick note about the setup: When attached to an AMD graphics card, the display would boot up underscanned, with about an inch of black space around the screen. The LG’s on-screen menus have an Overscan setting, but turning that from its default off position to on did not fully correct the problem. We had to turn off overscanning on the display and then go to the AMD Catalyst Control panel’s advanced settings and move the overscanning slider to zero. In addition to the unwanted space, the screen was blurry in this underscanned mode, and that affected calibration. When we attached the monitor to a system with an Nvidia-based graphics card, the proper resolution came up automatically and the image filled the screen as expected.
Once we had the screen properly set up, the 23ET83V-W performed well in most of our image tests. We found no stuck or dead pixels, and colors were uniform. Text was legible even at small point sizes, and photographs looked good, although making out details in shadowy areas of the image was hard. In some of the DisplayMate gray-level test patterns, we were unable to see differences in the first few gray-level patches. Switching the display’s Black Level control setting from its default Low to High resolved the issue.
The LG’s viewing angle wasn’t quite as stellar as that of other IPS screens we’ve seen. Color shifts weren’t an issue, but at extreme angles it was harder to see what was on screen. This minor drawback is probably attributable to the touchscreen coating.
Speaking of the touchscreen, the 23ET83V-W performed admirably in that regard. It was responsive and accurate, and we had no problems using Windows 8 touch gestures or closing windows and choosing menu items on the Windows 8 desktop.
The LG Flatron 23ET83V-W is a capable touchscreen display. Its viewing angle isn’t as wide as that of most IPS screens we’ve tested, but it’s still very good. Its controls were a bit of a hassle to use, and we needed to adjust the black-level settings to help the display look its best. While those are admittedly minor grievances, the monitor’s lack of ergonomic agility could reduce the amount of time you end up using its touch capabilities.
LG Flatron 23ET83V-W, $550 (street price)
Pros:
10-point touch
IPS panel
LED backlight
Cons:
Limited ergonomic flexibility
Black levels require adjustment
Bottom line:
This is a very good display, but its controls are more difficult to use than they should be, and its viewing angles aren't as good as those of other IPS monitors we've evaluated.
3.5 stars
ViewSonic TD2340
ViewSonic’s TD2340 display is built like a tank, weighing a hefty 20.4 pounds. It features a 23-inch, LED-backlit, IPS panel that delivers a resolution of 1920 by 1080 pixels, and it supports 10 touch points.
The TD2340 has a heavy-duty, dual-hinged base that offers a few inches of height adjustment, the ability to pivot into portrait mode, and even the option to tilt the display down so that it sits completely flat like a tabletop. At a height of 6 inches above the desk, however, the flat orientation seems like an awkward way to work. The best position we found for typing directly on screen using the touch keyboard was tilting the panel back to a 45-degree angle and lifting the bottom-front edge a couple of inches off the desk. This position allowed us to type on the screen without reaching as far, while still being able to keep our physical keyboard and mouse in front of us.
You'd never guess from the ViewSonic TD2340's Frankenstein feet just how limber this display can be.
You can connect the TD2340 to your computer via HDMI, DisplayPort, or VGA. To use the touch capabilities, you need a USB connection as well. The on-screen controls are simple and easy to use, which we find refreshing. Button 1 brings up the menus, button 2 makes selections, and the up and down arrows adjust color, brightness, contrast, volume from the SRS speakers, the on-screen menu position, and much more.
The TD2340 offers a wide viewing angle, which is helpful if you collaborate with other people around your screen or if you take advantage of the aforementioned flexible stand to position the screen at nonstandard angles. Text was legible even at small point sizes, and colors were uniform across the screen. We found no stuck or dead pixels when testing the display. The glossy screen helps to give photographs more depth, but glare can be an issue. You'll need to consider where, in relation to windows or other fixed light sources, to position a glossy-screen display like the TD2340.
The ViewSonic’s 10-point touch capabilities were impressive. Input was responsive and accurate, and we didn’t have any issues using Windows 8 gestures or maneuvering around the Windows 8 desktop.
Of the Windows 8 touchscreen monitors we’ve evaluated, the ViewSonic TD2340 is the most capable. Its wide viewing angle, its agile yet bulky stand, and numerous little touches such as the SRS speakers, multiple inputs, and easy-to-use menus combine to make the TD2340 a great choice for Windows 8 users. It’s more expensive than some other touchscreen displays, but it earns its price tag.
ViewSonic TD2340, $600 (street price)
Pros:
10-point touchscreen
Wide viewing angles
Versatile stand
DisplayPort
Cons:
Not very attractive
Bulky and heavy
Bottom line:
ViewSonic's TD2340 is more expensive than other monitors in its class, but it delivers enough features and value to warrant the difference.