Not Really a Blog

October 14, 2011

Keeping useful information with you

Filed under: Computers, Tips — Tags: , — jesus @ 12:21

Back in the day when I was studying at uni, I would carry little notes with me on whatever I happened to be studying at the time: little summaries, so that I didn’t have to open a book and read through it whenever I just wanted to refresh my memory.

These days, with the overload of information about anything computer-related, I attack the problem in a similar, though more techie, way: I keep text files with the important information about the matter at hand. The idea is that I put the information I know I’ll easily forget into those files, so that I can go back and refresh my memory without having to reach for a book, search Google, or ask a friend. I use plain text files because they are the most universal format: they can be opened and read with anything, so I don’t fall into the trap of keeping my notes in a format that is proprietary or difficult to read from other devices.

Now, I put these files in my Dropbox folder so that they are spread across all my devices. Simple, effective and easy. Also, if you update a file on the spot, the change magically appears on all your other devices.

For example, right now I’m trying to learn more about how to manage and work with Adobe Lightroom, and I find the amount of toggles, switches, options and procedures simply too much: something I’m pretty sure I’ll forget the moment I stop using it for more than two weeks. So while I’m learning I always keep these notes up to date.
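
In practice this amounts to nothing more than keeping the notes inside the synced folder. A minimal sketch (the folder layout and file name are made up for illustration):

# keep plain-text notes inside the Dropbox folder so every device sees the same files
mkdir -p ~/Dropbox/notes
vi ~/Dropbox/notes/lightroom.txt    # e.g. the Lightroom cheat sheet mentioned above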

This has helped me save time many times, so I hope that you find it useful too!

February 6, 2011

Using a password manager with Dropbox

Filed under: Computers, Internet — Tags: , — jesus @ 18:10

If you follow recommended practices you should have a strong and unique password on every single website (or service) you visit or use, so that access to the rest of your services is limited if one of your passwords is guessed or captured in some way. While this is all very well, it’s quite hard to do in practice.

We are either lazy and tend to repeat the same passwords across different websites, or we use variations of a few passwords so that we can keep them in our memory. I confess I have been using this method with the less important websites, reserving a few strong passwords (and memorizing them) for the most important ones. I have even written about different ways in which you can build a strong password based on a pattern and some specific bits of information.

Up until a few months ago I was still using this method, but then I became more security conscious and started using a password manager to store all my passwords, updating the passwords on most of the websites I use. A password manager works by storing all your passwords in an encrypted database file on disk, so you can access all of them by providing a master password. This master password therefore needs to be strong enough. Now, this is all very well, but it’s useless if you keep your database at home and you are on the move, at work, etc.

So, what I’ve found works best is keeping that encrypted file in a private folder on Dropbox, which means I have the file on all my computers and can access it anywhere. You can even choose a password manager that works on all major operating systems so you are not limited by OS.
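
As a rough illustration of the setup (the password manager, file name and paths here are assumptions for the sake of the example), the encrypted database simply lives inside the synced folder and each machine’s password manager is pointed at it:

# move the (already encrypted) database into Dropbox so every machine syncs it
mkdir -p ~/Dropbox/private
mv ~/passwords.kdb ~/Dropbox/private/passwords.kdb
# then open ~/Dropbox/private/passwords.kdb from, say, KeePassX on each computer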

And before you tell me that this could be insecure if there are key loggers in action or any other kind of compromised system: yes, there’s a risk of handing over all your passwords. But, well, you need to find a compromise between being totally paranoid and keeping your passwords in a fire-proof safe, and having all the websites share the same password :-). I’m a bit paranoid, so I don’t store the really important passwords there, like my Gmail accounts, bank accounts, etc. Your mileage may vary, so use with caution.

If you have any other suggestions, please let me know as I’m interested in other ways in which you guys have solved this problem, if solved at all ;-)

July 6, 2010

Easy way to stop the annoying popups from snap.com

Filed under: Internet — Tags: — jesus @ 20:08

Get annoyed by them? Me too. A lot. And it seems you can’t disable them by using their interface. Or at least it doesn’t work for me.

How to fix this? Well, just like with most of these things, the popups come from a JavaScript file that the page loads. So, let’s just not load it:

First of all, edit /etc/hosts and add a line like 127.0.0.1 spa.snap.com:


sudo vi /etc/hosts

so that the file ends with a line like:


127.0.0.1 spa.snap.com

Better now!

February 6, 2010

The effect of temporary tables on MySQL’s replication

Filed under: System Administration — Tags: , — jesus @ 23:05

The other day I needed to set up a new set of MySQL instances at work that would replicate from an existing node. I was setting these up because the master node is running out of disk space and is very slow.

Usually, when you need to restore a database you do it in three parts:

  1. Install the binaries
  2. Load the initial data from the most recent MySQL backup.
  3. Set it replicating from one of the nodes by specifying the binary log file and position from which you want it to replicate (which usually corresponds to the day you took the backup).

Now, because we usually compress the binary logs and because the master didn’t have enough disk space to have all these binary logs uncompressed (such that the new slave could replicate by connecting to the master and talking the MySQL protocol), I needed to transfer them to the new slave and pipe them into MySQL from there. Seems simple, huh?

Everything went fine on point 1 and 2. But then, while piping the contents of the MySQL binary logs into the new databases, it all went wrong. What I used to pipe them was:


for file in master-bin* ; do echo "processing $file" ;    ../mysql/bin/mysqlbinlog "$file" | ../mysql/bin/mysql -u root -ppassword  ; done

Which is how you usually do these things, but this is what I got:


db@slave:~/binlogs$ for file in master-bin* ; do echo "processing $file" ;    ../mysql/bin/mysqlbinlog "$file" | ../mysql/bin/mysql -u root -ppassword  ; done
processing master-bin.1853
processing master-bin.1854
processing master-bin.1855
processing master-bin.1856
processing master-bin.1857
processing master-bin.1858
processing master-bin.1859
processing master-bin.1860
ERROR 1146 at line 10024: Table 'av.a2' doesn't exist
processing master-bin.1861
ERROR 1216 at line 1378: Cannot add or update a child row: a foreign key constraint fails
processing master-bin.1862
ERROR 1216 at line 22825: Cannot add or update a child row: a foreign key constraint fails
processing master-bin.1863

So, table av.a2 does not exist. WTF?

Investigating this table a bit, it seems there’s a script which executes the following on it every day:


...

if test $ZZTEST -lt 300000; then
 echo "ERROR: Less than 300k"
 exit 1
fi
cat > sql << EOF
create temporary table a1 (mzi char(16) default null, key mzi(mzi));
create temporary table a2 (mzi char(16) default null, key mzi(mzi));
create temporary table a3 (mzi char(16) default null, key mzi(mzi));
EOF
cat v | grep ^447.........$ | awk '

...

Now, about CREATE TEMPORARY TABLE: if you read about it in the MySQL docs you’ll see that temporary tables are only visible to the current connection and are dropped automatically when that connection finishes. There are a few known problems with replication and temporary tables, but this couldn’t possibly be the same problem, as these were the binary logs from the master. So, what’s going on here?

The problem here comes from the binary logs being rotated and the way I was inserting them. It just happened that the three SQL statements:


create temporary table a1 (mzi char(16) default null, key mzi(mzi));
create temporary table a2 (mzi char(16) default null, key mzi(mzi));
create temporary table a3 (mzi char(16) default null, key mzi(mzi));

were created at the end of binary log file master-bin.1859, and then there was an SQL statement in file master-bin.1860 (inserting data into av.a2) which failed because it expected those temporary tables to exist (and they didn’t). This happened because we were using a for loop in bash to insert the binary logs, so there is one mysql connection per binary log file; when file master-bin.1859 finished, that connection closed and MySQL automatically dropped the three temporary tables, so on the next connection (file master-bin.1860) those tables were missing.

There are a few ways in which you can work around this.

One approach is to concatenate everything into one big SQL file and pipe that into MySQL, something like:


for file in master-bin.1* ; do echo "Using $file" ; ../mysql/bin/mysqlbinlog "$file" >> all.sql; date  ;  done
cat all.sql | ../mysql/bin/mysql -u root -ppassword

Alternatively, if you want to avoid creating one big fat file, you can do something like:

(for file in master-bin.1*; do echo "Using $file" 1>&2; ../mysql/bin/mysqlbinlog "$file"; date 1>&2; done) | ../mysql/bin/mysql -u root -ppassword

which should also work, since in this case everything goes through a single connection.

But these approaches have an obvious setback: if something goes wrong you cannot easily have a look at what failed (well, you sort of can, but it’s extremely difficult); it will fail on one of the files and then keep failing with the rest of them.

The better approach, as discussed in High Performance MySQL, is to use a log server: that is, a MySQL server that doesn’t store any data itself and is only used to serve binary logs for replay. That way you won’t have this problem, and it also lets you interact with the server and its diagnostic messages in case something goes awry.
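
Something like the following is roughly how such a log server could be put together (just a sketch of the idea; all paths, the port and the replication credentials are made up, and the book describes the proper procedure):

# a throwaway mysqld instance whose only job is to serve the master's binlogs
mkdir -p /data/logserver
mysql_install_db --datadir=/data/logserver
cp master-bin.1* /data/logserver/
( cd /data/logserver && ls master-bin.1* > master-bin.index )
mysqld --datadir=/data/logserver --port=3307 \
       --log-bin=/data/logserver/master-bin \
       --log-bin-index=/data/logserver/master-bin.index &

# then point the new slave at it and replay through normal replication,
# which keeps temporary tables alive and lets you use SHOW SLAVE STATUS
../mysql/bin/mysql -u root -ppassword -e "
  CHANGE MASTER TO MASTER_HOST='logserver', MASTER_PORT=3307,
    MASTER_USER='repl', MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='master-bin.1853', MASTER_LOG_POS=4;
  START SLAVE;"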

Use temporary tables?

My advice here would be to discourage the use of CREATE TEMPORARY TABLE, because it can break replication in mysterious ways, but that may be too harsh. There are a few workarounds that can be applied at the application level, described in http://www.xaprb.com/blog/2007/05/11/how-to-eliminate-temporary-tables-in-mysql/, which I think are worth thinking about.

Any experience with these problems?

November 25, 2009

Tired of remembering complicated passwords?

Filed under: Computers — Tags: , , , , — jesus @ 16:41

My colleague Mat showed me a nice little trick today. Say you are bored or tired of coming up with a different password for every website you use, but you don’t want to use the same password on all of them either. It’s simple and easy if you use a Linux computer: you only have to use, and more importantly remember, a simple word or combination, plus a little command in the shell to generate a really difficult-to-guess password.

Say you want to access your Google account and you want to use google as the password. So go and use the SHA-1 hash algorithm to generate an interesting password like:


% echo "google" | sha1sum
d662cc72cfed7244a88a7360add85d5627b9cd6c  -

Or just use your hash algorithm of choice. And if the result is too long for the website you want to use it on, take the first 10 characters or something:


% echo "google" | sha1sum | cut -c1-10
d662cc72cf

And that’s it, copy and paste and you are done. Yes, it’s a bit of a faff, but, well, you want something secure, don’t you?
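
If you end up doing this a lot, the trick fits in a tiny shell function (a sketch; the function name is made up, and it simply reproduces the exact commands above, trailing newline from echo included):

# derive a short, hard-to-guess password from a memorable word
sitepass() { echo "$1" | sha1sum | cut -c1-10; }

sitepass google    # prints d662cc72cf, as above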

Simply brilliant. Thanks Mat.

November 17, 2009

XFS and barriers

Filed under: Linux, System Administration — Tags: , , — jesus @ 09:33

Lately at work, we’ve been trying to figure out what the deal with barriers is, for both XFS and EXT3, the two filesystems we like most. If you don’t know what barriers are, go and read a bit on the XFS FAQ. Short story: XFS comes with barriers enabled by default, EXT3 does not. Barriers make your filesystem a lot safer against data corruption, but they degrade performance a lot. Given that EXT3 does not do checksumming of the journal, you could also end up with lots of corruption if barriers are not enabled. Go and read about it on Wikipedia itself.
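
For reference, toggling barriers on both filesystems is just a mount option (a quick sketch; the mount points are made up, and the option names come from the respective filesystem docs):

# XFS: barriers are on by default; turn them off with nobarrier
mount -o remount,nobarrier /data
# EXT3: barriers are off by default; turn them on with barrier=1
mount -o remount,barrier=1 /home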

If you google a bit you’ll see that there are lots of people talking about it, but I definitely haven’t found an answer to what is best and under which scenarios. So, starting with a little test on XFS, here are the results, totally arbitrary, on a personal system of mine. The system is an Intel Core 2 Duo E4500 at 2.2 GHz, 2GB of RAM and a 500GB hard drive in a single XFS partition. I’m testing it with bonnie++ and here are the results. First, mounting with the defaults (that is, barriers enabled):

# time bonnie -d /tmp/bonnie/ -u root -f -n 1024
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
kore             4G           54162   9 25726   7           60158   4 234.2   4
Latency                        5971ms    1431ms               233ms     251ms
Version  1.96       ------Sequential Create------ --------Random Create--------
kore                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 1024   260  11 489116  99   299   1   262  11 92672  56   165   0
Latency               411ms    3435us     595ms    2063ms     688ms   23969ms

real    303m50.878s
user    0m6.036s
sys    17m52.591s
#

Second time, after a few hours, doing a

mount -oremount,rw,nobarrier /

we get these results (barriers not enabled):

# date ;time bonnie -d /tmp/bonnie/ -u root -f -n 1024 ; date
Tue Nov 17 00:43:53 GMT 2009
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
kore             4G           66059  12 30185  10           71108   5 238.6   3
Latency                        4480ms    3911ms               171ms     250ms
Version  1.96       ------Sequential Create------ --------Random Create--------
kore                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
1024  1830  85 490304  99  4234  17  3420  24 124090  78   402   1
Latency               434ms     165us     432ms    1790ms     311ms   26826ms

real    67m21.588s
user    0m5.668s
sys     11m30.123s
Tue Nov 17 01:51:15 GMT 2009

So, I think you can actually tell how differently they behave. I haven’t turned these results into graphs, but let me show you some monitoring graphs from the two experiments. The first test was run yesterday in the afternoon; the second one just after midnight. You can see the difference. Here are the CPU and load average graphs.

I’ll try to follow up shortly with more findings, if I find any ;-). Please feel free to add any suggestions or comments about your own experiences with these problems.

CPU Usage

Load Average

November 16, 2009

Nice Firefox and Thunderbird themes

Filed under: Internet — Tags: , , , — jesus @ 09:12

I’ve found two themes for Firefox and Thunderbird that I’m so pleased with that I have to promote them a bit :-). They are Charamel and Silvermel, created by Kurt Freudenthal. I discovered them thanks to Chewie. What I like about them:

  • They work with newer versions of Mozilla Firefox and Thunderbird (including version 3 beta 4).
  • They’ve got really nice colours and a nice layout.
  • They work fine under Linux and Mac OS X. On the latter they let you have small icons in the bookmark bar, whereas the default theme does not. See the image to see what I mean.
  • You’ve got two different colours to choose from.

So go and buy the guy a beer so he can work more on them :-).

November 15, 2009

Monitoring data with Collectd

Filed under: System Administration — Tags: — jesus @ 18:50

I’ve been using collectd for quite a while, just to monitor the performance of my workstation. I’ve tried other solutions (cacti, munin, etc.) but I didn’t like how they worked, the graphs they created, or the amount of work required to get them going; overall I’ve found collectd to be a good solution for my monitoring needs (which are basically graphing and getting some alerts). I like it because, among other things, it generates nice graphs:

But what I like the most about it is the architecture it’s got for sending data and its low memory footprint.

Today I’ve been playing with its network plugin and I quite like it. The network plugin allows clients to send their monitoring results to a central server, just like in the picture below. The data is sent over UDP, captured by the server and stored in RRD files, so everything ends up centrally stored. This also guarantees that a client is not going to block while sending its data. Obviously we want to make sure our network connection is reliable.

Collectd Architecture

This means that you can have collectd running on a number of computers, all sending their data to a server that stores and displays it. The magic of it is that the memory footprint is very small (it’s written in C) and that each client can send its data to a single server or to several, even to a multicast group, which is very nice.
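
For what it’s worth, this client/server split is just a couple of stanzas in collectd.conf; a minimal sketch (the server address is made up, and 25826 is the plugin’s usual default port):

# on every client: send the collected values to the central server
LoadPlugin network
<Plugin network>
  Server "192.168.1.10" "25826"
</Plugin>

# on the central server: listen for values and write them out as RRD files
LoadPlugin network
LoadPlugin rrdtool
<Plugin network>
  Listen "0.0.0.0" "25826"
</Plugin>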

Things that it, allegedly, doesn’t do very well are monitoring and generating alerts (although the latest version claims it can handle simple thresholds). Also, the web interface, collection3, written in Perl, is a major liability.

So, I’m planning on spending a few more hours playing with this and possibly coming up with an article on how to set it up, integrated with my systems and trac, such that I:

  1. Have a plugin to display graphs on a wiki page, on trac possibly.
  2. Have it sending the data via openvpn (although the latest version supports encryption and signing) for clients behind a firewall.
  3. Make the most use of the plugins.

Any suggestions for a better web interface for collectd?

October 17, 2009

Reload a page in Safari. How difficult it can be?

Filed under: ¿Pero qué coño?, Mac — Tags: , — jesus @ 02:32

I have been really annoyed for a while trying to find the reload button in Safari 4 on the Mac. I just couldn’t find it, but it seems I was completely blind: they’ve placed it where you wouldn’t expect it, on the right-hand side of the address bar. What genius thought it would be a good idea to move it from where everyone assumes it is?

Yes, I know Cmd-R. It’s just that… Arg…

October 15, 2009

Little surprises in HTTP Headers

Filed under: Internet, System Administration — Tags: — jesus @ 22:59

Last week I moved a blog I’ve got in Spanish to wordpress.com. Basically, I really like wordpress.com and I believe it’s well worth it in terms of freeing my time from administering a WordPress installation, keeping up with security fixes, etc. And today, having a little bit of time, I was tweaking my old website to redirect to the new site using an HTTP permanent redirect. This is what I found in the HTTP headers:

[golan@mars ~] % HEAD http://roncero.org/blog/
200 OK
Cache-Control: max-age=260, must-revalidate
Connection: close
Date: Thu, 15 Oct 2009 21:35:09 GMT
Server: nginx
Vary: Cookie
Content-Type: text/html; charset=UTF-8
Last-Modified: Thu, 15 Oct 2009 21:34:29 +0000
Client-Date: Thu, 15 Oct 2009 21:35:09 GMT
Client-Peer: 76.74.254.123:80
Client-Response-Num: 1
Link: ; rel=shortlink
X-Hacker: If you're reading this, you should visit automattic.com/jobs and apply to join the fun, mention this header.
X-Nananana: Batcache
X-Pingback: http://blog.roncero.org/xmlrpc.php

So, apart from various bits of information (nginx), what I really really liked was the X-Hacker header :-). Fancy a job?
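
As an aside, the permanent redirect itself is only a line of web-server configuration. A sketch, assuming the old site runs Apache with mod_alias (the target URL is the one suggested by the X-Pingback header above):

# in the old site's configuration or .htaccess
Redirect permanent /blog/ http://blog.roncero.org/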
