Recently, I was given the opportunity to become part of the Google Glass Explorer program. As a “tech guy”, I was very excited for the opportunity to try out these new augmented reality glasses.
There are only a few complaints I have about Glass. First, there aren’t many applications available right now besides the ones Google provides. I’m working on fixing this problem, but if you pull Glass out of the box and start using it today, you shouldn’t get your hopes up. I’ve also let a few friends try Glass, and those who wear actual glasses have had difficulty using it. They end up taking their glasses off so they can see Glass, but then they can’t see anything else. It is my understanding that Google is actively working to make Glass compatible with prescription glasses, so hopefully that will improve the experience for those people.
Being a software engineer, I decided that while I couldn’t solve the prescription glasses problem, I could work to fix the lack of applications. One of the first things I wanted to do when I got my pair of Google Glass was to actually write code for it. Given my role as systems admin for certain projects, I decided to integrate our server monitoring service (shout-out to CopperEgg) with Glass so I could receive alerts right on my Glass instead of via email on my phone. After a quick read through the CopperEgg API documentation, I added my custom URL as a push notification target for any alert that triggers. Once I had my website set up to receive these notifications, I then had to actually do something with them. Another quick read through the Google Mirror API documentation and I had identified the APIs available for sending cards (Google’s fancy name for a timeline item) directly to a person’s Glass. A few hours later (mostly spent banging my head against the keyboard as I tried to get Google’s permission system to actually send Glass cards to my account), I finally had a working implementation. Over the past few days I’ve tested the alerts and am very pleased with the notifications on Glass.
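For the curious, the core of the integration is small. Here’s a rough Python sketch of what building and sending a timeline card looks like; the alert field names are placeholders for whatever your monitoring webhook actually delivers, and you’d need a valid OAuth 2.0 access token authorized for the Glass timeline scope:

```python
import json
import urllib.request

def build_timeline_card(alert):
    """Turn a monitoring alert into a Mirror API timeline item (a "card").

    The alert dict's keys ("condition", "server") are made up for this
    sketch; use whatever fields your webhook payload actually contains.
    """
    return {
        "text": "ALERT: %s on %s" % (alert["condition"], alert["server"]),
        # Ask Glass to chime/buzz when the card arrives.
        "notification": {"level": "DEFAULT"},
    }

def send_timeline_card(card, access_token):
    """POST the card to the wearer's timeline via the Mirror API."""
    req = urllib.request.Request(
        "https://www.googleapis.com/mirror/v1/timeline",
        data=json.dumps(card).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)

card = build_timeline_card({"condition": "CPU > 90%", "server": "web-01"})
# send_timeline_card(card, access_token)  # needs a real OAuth token
```

The hard part, as noted above, isn’t the API call itself but getting the OAuth consent flow wired up so that Google will let you insert cards into your own timeline.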
So where do we go from here? I’m currently working on extending the CopperEgg monitoring to include some of the other metrics made available to us (like being able to cycle through all your server stats, which update every XX minutes) and extending it to other notification platforms such as Rackspace. I’m also working on a full-fledged piece of Glassware (a Glass app) to make full use of Glass’s accelerometers and other functionality.
It seems like just yesterday we traveled cross country to the little town of Portland for the annual U.S. Drupalcon. As we left Baltimore on Sunday morning, we could tell we weren’t the only ones going out to Drupalcon. There were quite a few travelers on the plane with “nice nodes” T-shirts or carrying enough technology with them to run what seemed like a mobile production studio. As even more Drupal devs joined our flight in Salt Lake City, I knew I was going to the right place.
One of the things I like about Drupal events is their “Birds of a Feather” (BOF) sessions. For those who don’t know, on the first day of the conference the organizers put out a blank whiteboard divided into a grid by room number and time. Once the whiteboard is up, it becomes a free-for-all to sign up for a session on a topic you would like to talk about. These sessions are informal and relatively small, only 20-40 people, but they let attendees discuss things that don’t have a designated speaker during the regular conference sessions. On Wednesday, I co-hosted a BOF session about MongoDB.
At Mindgrub, we use MongoDB in Drupal because of the performance enhancements it provides. MongoDB is a NoSQL document database with very fast read and write performance. If you have a content type with a lot of fields, you should consider using MongoDB because it stores all of those fields on the node object itself, rather than in separate tables. As we all sat around discussing the advantages and disadvantages of MongoDB, we heard stories from the developers of Examiner.com and one of the lead developers at 10Gen, the makers of MongoDB.
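To illustrate the difference, here’s a toy sketch: plain Python dicts stand in for Drupal’s per-field tables and for a MongoDB collection, and all the field names are made up.

```python
# Relational field storage: each field lives in its own table, so loading a
# node means one query per field table, joined back together.
field_body = {(101, "en", 0): {"value": "Hello"}}   # keyed by (nid, lang, delta)
field_tags = {(101, "en", 0): {"tid": 7}}

# Document storage: the entire node, fields included, is one MongoDB
# document, so a node load is a single keyed lookup.
node_document = {
    "_id": 101,
    "type": "article",
    "title": "Hello Glass",
    "body": [{"value": "Hello"}],
    "field_tags": [{"tid": 7}],
}

def load_node(collection, nid):
    # Stand-in for collection.find_one({"_id": nid}) in pymongo.
    return collection.get(nid)

nodes = {node_document["_id"]: node_document}
node = load_node(nodes, 101)
```

The more fields a content type has, the more join work the relational path does on every load, which is exactly where the single-document lookup pulls ahead.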
Both 10Gen and the development team behind Examiner had interesting stories to tell about the introduction of MongoDB into the Drupal world. We heard stories ranging from incredible ease and success to days of banging heads against the wall because of performance issues. 10Gen was very open about what MongoDB was good for in Drupal and where things still needed work. At the end of the session, I think we convinced quite a few of the other attendees to at least consider MongoDB for their next Drupal project.
Some of the pros of using MongoDB are:
- It is fast. Really fast.
- You avoid having to join lots of tables together when loading a node
- Database schemas become very easy to adjust. Need to store a new piece of data on a node? Just save it with the rest of the data.
- Community support
- Handles big data very well, especially if that data has a lot of variation in structure
In the interest of full disclosure, some of the cons of MongoDB are:
- A non-relational database makes doing relationships difficult. I know this sounds obvious, but most people tend to forget it until they try to do a join on an Entity Field Reference.
- Views integration is provided, but there are some limitations.
- There are some things in Drupal that don’t play nicely with MongoDB. For example, saving the weights of taxonomy terms in a vocabulary is done using direct database queries which means the MongoDB module can’t intercept them and save them properly. I worked around this by adding my own submit handler and rewriting that code to save the weight in MongoDB.
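Drupal modules are written in PHP, but the shape of that workaround is easy to sketch. Here it is in Python; the function name is mine, and the dict stands in for a real MongoDB collection (in pymongo you’d use `update_one` with `upsert=True`):

```python
# In-memory stand-in for a MongoDB collection of term weights.
term_weights = {}

def mongo_save_term_weights(vocabulary, weights):
    """Body of a custom form submit handler: persist the reordered term
    weights ourselves, since the core form writes them with direct SQL
    that the MongoDB module never sees."""
    for tid, weight in weights.items():
        term_weights[(vocabulary, tid)] = weight

# The vocabulary overview form submits the reordered weights, e.g.:
mongo_save_term_weights("topics", {7: 0, 9: 1, 12: 2})
```

The general lesson: anywhere Drupal core bypasses the pluggable storage layer with hand-written queries, you have to intercept the save yourself.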
In the end, MongoDB is a wise choice if you have a lot of data or data with a lot of variation in structure. If you are making a small, static site, I would not recommend MongoDB because it is overkill.
Are you using MongoDB or have a question about its integration with Drupal? I would love to hear your success (or horror) stories and will gladly answer any questions you have.
- According to Fox News, almost every website in existence has approximately 56 security flaws waiting to be exploited. Additionally, the average response time to fix one of these vulnerabilities is 193 days.
- How safe are the websites that advertise “Secured by Norton” or “Paypal Verified”? Troy Hunt has an interesting article about what the logo really means.
- An MIT professor and RSA employee have invented “Honeywords” – fake passwords that trigger an alert if an attacker tries to break in with them. The concept is very interesting and is definitely better than utilizing Client Side Cryptography to protect users.
- The folks over at Cylance have a nice piece about how they hacked Google. Definitely worth a read to see how the big companies are just as vulnerable as the small ones.
- Think hacking is difficult? Just ask these 12- and 13-year-old Alaskan school students.
- Nginx announced vulnerability CVE-2013-2028 on their mailing list. The vulnerability allows for remote code execution in several recent versions of Nginx. Time to update!
- Do you have Wifi on your smartphone? If so, you will definitely want to check out how Nordstrom is tracking you in their stores.
- Spotify recently patched a vulnerability where users could download free copies of songs they were listening to.
During lunch today, some of my esteemed colleagues brought up the point that we frequently transmit passwords in plain text between the client and the server. Sure, we use SSL to secure logins between the user’s browser or mobile app and the web service, but some of my coworkers were insistent that we hash passwords client side before sending them to the server. Their rationale was that if an attacker can submit a login request every millisecond, they can make 1,000 password guesses per second. However, if we were to add a cryptographic hash function that takes 80 milliseconds to compute, we could greatly slow down the number of login attempts an attacker can make per second. They argued that we should use a hashing function that I had never heard of before to generate a hash of the user’s entered password and then send that to the server. Based on an average of 80 milliseconds to calculate the hash, we would take the number of login attempts an attacker can make per second from 1,000 down to 12.5. WOW! If this is such a great security measure, why aren’t sites like Facebook, Google, and Twitter using it today?
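The arithmetic behind that claim is easy to check:

```python
# One guess per millisecond when the only cost is the network round trip.
baseline_guesses_per_second = 1000 / 1      # 1,000 guesses per second

# With an 80 ms client-side hash added to every attempt,
# each guess now costs 80 ms.
hash_cost_ms = 80
slowed_guesses_per_second = 1000 / hash_cost_ms  # 12.5 guesses per second
```

1000 divided by 80 does indeed work out to 12.5, so the numbers check out; the problem, as we’ll see, is the assumption that the attacker has to pay that cost at all.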
Wait a minute! These websites don’t implement client side cryptography. In fact, even if I use HTTPS, I can still inspect the POST request and see the password for my online banking account sent in plain text. The reason: it simply isn’t worth it. Contrary to popular belief, it doesn’t slow login attempts down, because an attacker isn’t going to use the standard login form when brute forcing a password. Instead, the attacker will look through the client side code to find the hashing function and use it to generate his own list of pre-hashed passwords. Then he submits requests to the server (bypassing the client) with his desired username and hashed password guess. The server has no way of knowing that the attacker isn’t going through the client side application and merrily validates the hashed password. Is it a bit more work for the attacker to generate a list of hashed passwords instead of using a dictionary of words? Yes. Is it enough to slow the attacker down? Not really, because chances are these lists (known as Rainbow Tables) already exist.
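To make the bypass concrete, here’s a minimal sketch. SHA-256 stands in for whatever hash function the client ships (remember, it’s delivered to every browser, so the attacker can read it), and the passwords are made up:

```python
import hashlib

def client_hash(password):
    # The client-side hash. It is not a secret: anyone can view source
    # and reuse it offline at full speed, paying the 80 ms cost zero
    # times per login attempt.
    return hashlib.sha256(password.encode()).hexdigest()

# What the server expects after a legitimate client-side-hashed login:
stored = client_hash("hunter2")

# The attacker never touches the login form. He precomputes hashes for a
# dictionary offline, then replays them straight at the server.
rainbow = {client_hash(p): p for p in ["password", "letmein", "hunter2"]}

attacker_guess = next(h for h in rainbow if h == stored)
# The server validates attacker_guess exactly as it would a real login.
```

Notice that once the server accepts the hash, the hash *is* the password; the expensive function never has to run during the attack.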
The obvious solution to a Rainbow Table is to salt passwords when hashing them. For those who don’t know, salting a password means adding an extra string to the password prior to hashing it. This makes the hashes of the same password different because the input is different, and it makes passwords more difficult for an attacker to crack. This still holds true for client side hashing. The downside of client side hashing is that the salt becomes known to the attacker right away, because it is sent to the client. Now, salts aren’t designed to be “secret” like passwords are. However, if an attacker knows the salt used in a particular hash, they can start building Rainbow Tables without having to compromise a database of passwords or code. Salts can’t change between user logins, because then the hashed password sent to the server would be different and no longer valid. This means that an attacker can, with reasonable certainty, spend a couple of months building a Rainbow Table from a dictionary list and the salt before you even know that your site has been targeted. By the time the attacker starts brute forcing the login service, they have already generated their Rainbow Table and it is just a matter of time before they find an entry that works.
So, what is a reasonable level of security to use on the client? The answer is simple. Use SSL for all of your connections and you will be doing the same thing as online banking sites, Google, Facebook, and Twitter. Writing a function to hash passwords client side before sending them to the server wastes client resources and development time, and gives the user a false sense of security. Also, some form of flood control is much more effective at limiting password guesses than hoping that a hashing algorithm takes a long time. Google addresses this by presenting repeated attempts with a Captcha, and Drupal handles it by simply blocking the IP address of the person trying to log in. Both are effective ways to stop brute force login attempts. Using client side cryptography is not.
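To show how little code effective flood control takes, here’s a toy per-identifier limiter with a sliding time window. This is my own sketch, not Drupal’s actual flood API, and the limits are arbitrary:

```python
import time

class FloodControl:
    """Allow at most `limit` attempts per `window` seconds per identifier
    (an IP address, a username, etc.)."""

    def __init__(self, limit=5, window=3600):
        self.limit = limit
        self.window = window
        self.events = {}  # identifier -> list of attempt timestamps

    def register(self, ident, now=None):
        now = time.time() if now is None else now
        # Keep only attempts still inside the window, then record this one.
        hits = [t for t in self.events.get(ident, []) if now - t < self.window]
        hits.append(now)
        self.events[ident] = hits

    def is_allowed(self, ident, now=None):
        now = time.time() if now is None else now
        hits = [t for t in self.events.get(ident, []) if now - t < self.window]
        return len(hits) < self.limit

flood = FloodControl(limit=3, window=60)
for _ in range(3):
    flood.register("10.0.0.1", now=0)  # three failed logins from one IP

blocked = not flood.is_allowed("10.0.0.1", now=1)  # fourth attempt refused
```

Three failures and the IP is locked out for the rest of the window; no amount of offline precomputation helps the attacker get around that.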
Do you disagree? Leave me a comment explaining your position or Tweet me @checkthelog and I’d be glad to hear your opinions.
- Security researchers found a couple of malware applications in the Google Play store. The apps have over 9 million downloads.
- Krebs on Security is reporting from multiple sources that online tea retailer Teavana has been hacked. The attackers are said to have made off with credit and debit card information.
- Researchers have figured out that Android passwords don’t hold up well in cold environments.
- Excited about being able to wave your credit/debit card at a scanner rather than swipe it? You shouldn’t be.
- There has been an increase in compromised Apache servers over the past couple of weeks. The folks over at We Live Security have a nice writeup about it.
- The Associated Press Twitter account was hacked this week. According to multiple sources, an employee fell for a phishing attack. Coworkers, please repeat after me: “I will not click links in emails.”
- A number of phpMyAdmin vulnerabilities were announced this week.
- LivingSocial was hacked this week. According to reports, the attackers made off with 50 million names, email addresses, and password hashes.