Not Really a Blog

November 25, 2009

Tired of remembering complicated passwords?

Filed under: Computers — jesus @ 16:41

My colleague Mat showed me a nice little trick today. Say you are tired of using a different password on every website, or maybe you don’t want to reuse the same password across sites. It’s simple and easy if you use a Linux computer. You only have to use, and more importantly remember, a simple word or phrase, plus a little shell command that turns it into a really hard-to-guess password.

Say you want to access your Google account and you want to base the password on the word google. Run it through the SHA-1 hash algorithm to generate an interesting password:


% echo "google" | sha1sum
d662cc72cfed7244a88a7360add85d5627b9cd6c  -

Or just use your hash algorithm of choice. And if the result is too long for the website you want to use it on, take the first 10 characters or so:


% echo "google" | sha1sum | cut -c1-10
d662cc72cf

And that’s it: copy, paste and you are done. Yes, it’s a bit of a faff, but, well, you want something secure, don’t you?

Simply brilliant. Thanks Mat.
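If you use the trick often, the two commands can be wrapped in a tiny shell function (the name genpass is mine, not from the post):

```shell
# genpass: derive a 10-character password from an easy-to-remember word.
# Note: echo appends a newline before hashing, so the output matches
# the examples above exactly.
genpass() {
  echo "$1" | sha1sum | cut -c1-10
}

genpass google   # prints d662cc72cf
```

Stick it in your ~/.bashrc and the password is one short command away.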

November 17, 2009

XFS and barriers

Filed under: Linux, System Administration — jesus @ 09:33

Lately at work, we’ve been trying to figure out what the deal with barriers is, for both XFS and EXT3, the two filesystems we like most. If you don’t know what barriers are, go and read a bit of the XFS FAQ. Short story: XFS comes with barriers enabled by default, EXT3 does not. Barriers make your filesystem much more resistant to data corruption, but they degrade performance a lot. Given that EXT3 does not checksum its journal, you could also get lots of corruption if they are not enabled. Go and read about it on Wikipedia too.

If you google a bit you’ll see that lots of people are talking about it, but I definitely haven’t found an answer to what is best and under which scenarios. So, starting with a little test on XFS, here are the results, totally unscientific, from a personal system of mine. The system is an Intel Core 2 Duo E4500 at 2.2 GHz, 2 GB of RAM and a 500 GB disk in a single XFS partition. I’m testing it with bonnie++. First, mounting with the defaults (that is, barriers enabled):

# time bonnie -d /tmp/bonnie/ -u root -f -n 1024
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
kore             4G           54162   9 25726   7           60158   4 234.2   4
Latency                        5971ms    1431ms               233ms     251ms
Version  1.96       ------Sequential Create------ --------Random Create--------
kore                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 1024   260  11 489116  99   299   1   262  11 92672  56   165   0
Latency               411ms    3435us     595ms    2063ms     688ms   23969ms

real    303m50.878s
user    0m6.036s
sys    17m52.591s
#

For the second run, a few hours later, after doing a

mount -oremount,rw,nobarrier /

we get these results (barriers not enabled):

# date ;time bonnie -d /tmp/bonnie/ -u root -f -n 1024 ; date
Tue Nov 17 00:43:53 GMT 2009
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
kore             4G           66059  12 30185  10           71108   5 238.6   3
Latency                        4480ms    3911ms               171ms     250ms
Version  1.96       ------Sequential Create------ --------Random Create--------
kore                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
1024  1830  85 490304  99  4234  17  3420  24 124090  78   402   1
Latency               434ms     165us     432ms    1790ms     311ms   26826ms

real    67m21.588s
user    0m5.668s
sys     11m30.123s
Tue Nov 17 01:51:15 GMT 2009

So, I think you can tell how differently they behave. I haven’t turned these results into graphs, but let me show you some monitoring graphs from the two experiments. The first test was run yesterday afternoon; the second one just after midnight. You can see the difference. Here are the CPU and load average graphs.

I’ll try to follow up shortly with more findings, if I find any ;-). Please, feel free to add any suggestion or comments about your own experiences with these problems.

CPU Usage

Load Average
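As an aside, if the no-barrier numbers convince you and you accept the corruption risk, the option can be made permanent in /etc/fstab rather than remounting by hand every boot (the device and mount point below are placeholders, not my actual setup):

```
# /etc/fstab -- example line only; adjust device and mount point
/dev/sda1  /  xfs  nobarrier  0  1
```

On battery-backed or write-cache-disabled storage this is generally considered safe; on a plain desktop disk like mine, you’re trading safety for the speed shown above.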

November 16, 2009

Nice Firefox and Thunderbird themes

Filed under: Internet — jesus @ 09:12

I’ve found two themes for Firefox and Thunderbird that I’m so pleased with that I have to promote them a bit :-). They are Charamel and Silvermel, created by Kurt Freudenthal. I discovered them thanks to Chewie. What I like about them:

  • They work with newer versions of Mozilla Firefox and Thunderbird (including version 3 beta 4).
  • They’ve got really nice colours and a nice layout.
  • They work fine under Linux and Mac OS X. On the latter they let you have small icons in the bookmarks bar, which the default theme does not allow. See the image to see what I mean.
  • You’ve got two different colours to choose from.

So go and buy the guy a beer so he can work more on them :-).

November 15, 2009

Monitoring data with Collectd

Filed under: System Administration — jesus @ 18:50

I’ve been using collectd for quite a while, just to monitor the performance of my workstation. I’ve tried other solutions (Cacti, Munin, etc.) but didn’t like how they worked, the graphs they created, or the amount of work required to get them going. Overall, I’ve found collectd a good solution for my monitoring needs (which are basically graphing and getting some alerts). I like it because, among other things, it generates nice graphs:

But what I like most about it is its architecture for sending data, and its low memory footprint.

Today I’ve been playing with its network plugin, and I quite like it. The network plugin lets clients send their monitoring results to a central server, as in the picture below. The data is sent over UDP and captured by the server, which stores it in RRD files, so everything ends up centrally stored. Because it’s UDP, the sender is guaranteed not to block while sending, though obviously we want the connection to be reliable.

Collectd Architecture

This means you can have collectd running on a number of computers, all sending their data to one server that stores and displays it. The magic is that the memory footprint is very small (it’s written in C) and that the data can go to a single server or several, even via a multicast group, which is very nice.
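A minimal sketch of the two sides of that setup in collectd.conf might look like this (the addresses are placeholders; 25826 is collectd’s default network port):

```
# On each client: send collected values to the central server
LoadPlugin network
<Plugin network>
  Server "192.168.0.42" "25826"
</Plugin>

# On the server: listen for values and write them to RRD files
LoadPlugin network
LoadPlugin rrdtool
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>
```

Swap the Server line for a multicast address and several servers can pick up the same stream.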

Things it allegedly doesn’t do very well are monitoring and generating alerts (though the latest version claims to support simple thresholds). Also, the web interface, collection3, written in Perl, is a major liability.

So, I’m planning to spend a few more hours playing with this, and possibly to come up with an article on how to set it up, integrated with my systems and Trac, such that I:

  1. Have a plugin to display graphs on a wiki page, on trac possibly.
  2. Have it sending the data via openvpn (although the latest version supports encryption and signing) for clients behind a firewall.
  3. Make the most of the plugins.

Any suggestions for a better web interface for collectd?
