How to upgrade Spacewalk database schema
After installing updates for spacewalk you may see a message like this on the home screen.
A schema upgrade is required. Please upgrade your schema at your earliest convenience to receive latest bug fixes and avoid potential problems.

To fix this, run the following commands.
spacewalk-service stop
spacewalk-schema-upgrade
spacewalk-service start
Another year, another turkey. :D
[ASCII art turkey]
Source code available on github.
How to fix graphite user creation in FreeBSD 10
If you see an error like the one below when you try to create a graphite user, you will need to update the graphite database before user creation will work.
django.db.utils.IntegrityError: NOT NULL constraint failed: auth_user.last_login
Unfortunately SQLite doesn't support DROP CONSTRAINT syntax, so you will need to create a new temporary table, copy over the existing table, and then rename it.
CREATE TABLE "auth_user2" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "password" varchar(128) NOT NULL, "last_login" datetime NULL, "is_superuser" bool NOT NULL, "username" varchar(30) NOT NULL UNIQUE, "first_name" varchar(30) NOT NULL, "last_name" varchar(30) NOT NULL, "email" varchar(75) NOT NULL, "is_staff" bool NOT NULL, "is_active" bool NOT NULL, "date_joined" datetime NOT NULL); insert into auth_user2 select * from auth_user ; drop table auth_user ; alter table auth_user2 rename to auth_user ;
Now user creation should work.
root@graphite:/usr/local/lib/python2.7/site-packages/graphite # python manage.py createsuperuser
Username (leave blank to use 'root'): wattersm
Email address: wattersm@watters.ws
Password:
Password (again):
Superuser created successfully.
Selecting random records in postgresql
In an attempt to make my site more responsive I have been working on optimizing the SQL code used on the backend. This includes the random quote generator that I have set up on the main page.
The old code used a query similar to below.
SELECT quote, name FROM quotes ORDER BY RANDOM() LIMIT 1
This works fine if you have small tables and fast disks but consider the issue when there is a table with millions of rows. To find *one* record the server must read through the table, sort the records, and then discard every result but one. This operation is slow and inefficient.
To improve performance you can reduce the number of rows read by using the table's primary key. Each row has a unique ID number, and the highest ID can be used as the upper bound for random(). For example, the following query selects a random record based on the last sequence value:
SELECT quote_text, name FROM quotes WHERE quote_id = (SELECT floor(random() * (SELECT last_value from quotes_quote_id_seq)+1)) ;
This query is not perfect: if an ID in the range has been skipped or deleted the result set will be empty, and your code will need to accommodate that, but it is still far more efficient than reading the entire table every time the page is loaded.
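One way to handle a miss is to simply retry the query in your application code. Here is a minimal sketch using psycopg2; the connection string and the retry cap are assumptions, not values from this site.

# Sketch: retry the random-ID lookup until a row comes back.
import psycopg2

QUERY = """SELECT quote_text, name FROM quotes
           WHERE quote_id = (SELECT floor(random() *
               (SELECT last_value from quotes_quote_id_seq)+1))"""

def random_quote(conn, max_tries=10):
    cur = conn.cursor()
    for _ in range(max_tries):
        cur.execute(QUERY)
        row = cur.fetchone()
        if row is not None:
            return row          # (quote_text, name)
    return None                 # repeated misses; fall back to a default quote

conn = psycopg2.connect("dbname=quotes")   # hypothetical connection string
print(random_quote(conn))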
Prevent iptables from spamming your console
How to disable firewall "spam" on your console.
I worked on a ticket recently for a customer concerned about firewall messages being sent to every user's console by the kernel. After doing a bit of research I discovered that the nf_ct_ftp module logs messages to syslog at *emergency* level by default, which results in every console being spammed by firewall messages. To prevent this you can make a few simple changes as follows.
First, set up a custom rsyslog conf file to send iptables messages to a different file.
cat << EOF > /etc/rsyslog.d/iptables.conf
:msg, contains, "nf_ct_ftp:" -/var/log/iptables.log
& ~
EOF
The first line means send all messages that contain the nf_ct_ftp: string to /var/log/iptables.log. The second line causes rsyslog to discard messages that were matched on the previous line. Adjust this rule according to your needs.
Second, add the following line to sysctl.conf and then apply it with "sysctl -p".

kernel.printk = 4 4 1 7
sysctl -p
See https://www.kernel.org/doc/Documentation/sysctl/kernel.txt for a description of these values.
Now restart rsyslog and test your changes using the "logger" command.
service rsyslog restart
logger -p kern.emerg -t kernel "nf_ct_ftp: dropping packet test"
You should not see anything on the console. Run cat /var/log/iptables.log to confirm that the entry was logged. Once you have confirmed that messages are being logged properly you can set up logrotate to manage the log file. Create a config file similar to the one below.
cat << EOF > /etc/logrotate.d/iptables
/var/log/iptables.log {
    rotate 7
    daily
    missingok
    notifempty
    delaycompress
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}
EOF
There is nothing else to do at this point.
How to Manually Change Domain in Magento
Changing the domain name on a Magento install requires a few steps to update the site URL in mysql. The procedure should be similar to below.
Update your core_config_data table to edit the two records for web/unsecure/base_url and web/secure/base_url.
mysql> update core_config_data set value = 'http://dev.example.com/' where path = 'web/unsecure/base_url';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> update core_config_data set value = 'http://dev.example.com/' where path = 'web/secure/base_url';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
After this is done delete the contents of WEBROOT/var/cache. The location of the WEBROOT varies depending on how your server is set up.
cd /home/username/public_html/var/
rm -rf ./cache/*
Update any .htaccess redirects you may have added.
That's it, you're done! Open the site in a new browser tab to make sure that everything loads properly.
How to fix "Your profile could not be loaded" error in Google Chrome
If you get an error from Chrome stating that your profile could not be loaded properly, here is the PROPER way to fix the issue. Unfortunately googling for this error leads to a lot of false information and speculation.
First, go to your profile's data directory. In Linux this would be ~/.config/google-chrome/Default.
Now check for any processes that have the Web Data file open.
lsof Web\ Data
Kill those processes.
Next run an integrity check on the database.
sqlite3 Web\ Data "pragma integrity_check"
This checks the database for errors and reports ok if the file is intact. After that is done, start Chrome back up.
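The same check can be run from Python if you prefer, since the Web Data file is just a SQLite database. A minimal sketch, assuming the default Linux profile path mentioned above:

# Sketch: run the integrity check on Chrome's Web Data database.
import os
import sqlite3

db_path = os.path.expanduser("~/.config/google-chrome/Default/Web Data")

conn = sqlite3.connect(db_path)
result = conn.execute("PRAGMA integrity_check").fetchall()
conn.close()

# Prints [('ok',)] when the database is intact, otherwise a list of problems.
print(result)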
Here is a simple canola/soybean oil soap recipe. Yield is approximately 8 lbs of soap. Lye amount is based on a saponification value of 0.130. See http://www.millersoap.com/soapdesign.html#SAP Tables for more details.
Canola/Soybean Soap
Ingredients:
- 70 oz canola oil
- 16 oz soybean oil
- 28 oz water
- 11 oz lye
- Scent - optional
- Coloring - Use dye or a small piece of crayon.
Directions:
- Dissolve lye into water. Prepare this mixture in advance; the water will take time to cool.
- Pour oil into large pot, heat to 130 degrees.
- Stir lye mixture into the oil.
- Blend with a stick blender until you see signs of tracing.
- Pour into molds and let harden.
Raw soap will take 24-48 hours to harden; after that it can be removed from the mold to cure. Allow 30 days of cure time.
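For anyone adjusting the oil amounts, the lye figure is just the total oil weight multiplied by the saponification value. A quick sketch of that arithmetic using the numbers above:

# Sanity-check the lye amount from the oil weights and SAP value above.
SAP = 0.130          # oz of lye per oz of oil, per the recipe
oils_oz = 70 + 16    # canola + soybean

lye_oz = oils_oz * SAP
print("lye needed: %.2f oz" % lye_oz)   # ~11.18 oz, rounded to 11 oz above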
Manually create a Wordpress admin user from the mysql command line
If you need admin access to a WordPress install, you can easily create a new admin user by running a few SQL commands on the database. This has been tested and verified to work on WordPress 3.5.
To do this you will first need to identify what database the site is actually using. Check wp-config.php for the database name and MySQL host info. Once you have that, connect to MySQL and run the following statements.
INSERT INTO wp_users (user_login,user_pass,user_email,user_registered,user_status) VALUES("user_name",md5('password'),"username@example.com",NOW(),0);
Then look up the new user's ID from the wp_users table and grant admin capabilities:
SET @user_id = (SELECT ID FROM wp_users where user_login = 'user_name');
INSERT INTO wp_usermeta (user_id,meta_key,meta_value) VALUES (@user_id,"wp_user_level","10");
INSERT INTO wp_usermeta (user_id,meta_key,meta_value) VALUES (@user_id,"wp_capabilities",'a:1:{s:13:"administrator";s:1:"1";}');
After reading about various cluster file systems I decided to set up a small cluster running Lustre using Storm VPS instances. All nodes have the same hardware configuration and use a 50 GB SAN volume connected through iSCSI as the lustre block device. Specs are as follows.
Node configuration:
OS: CentOS 6.3 x86_64
Kernel: 2.6.32-279.19.1.el6_lustre.x86_64
RAM: 3556 MB (Storm 4 GB)
Primary Disk: 300 GB virtual disk
Secondary Disk (iscsi): 50 GB SAN volume
CPU: Two Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz cores

Lustre configuration:
1 management server, 1 metadata server, 1 object storage server.
LNET was configured to use a private network interface.
Disk performance was tested with the sgpdd_survey script from the Lustre IOkit. Write speed appears to average around 35-40 MB/s.
Wed Apr 10 10:29:39 EDT 2013 sgpdd-survey on /dev/sda from oss1.watters.ws
total_size 8388608K rsz 1024 crg 1 thr 1 write 49.32 MB/s 1 x 49.32 = 49.32 MB/s read 68.15 MB/s 1 x 68.15 = 68.15 MB/s
total_size 8388608K rsz 1024 crg 1 thr 2 write 77.15 MB/s 1 x 77.15 = 77.15 MB/s read 92.85 MB/s 1 x 92.85 = 92.85 MB/s
total_size 8388608K rsz 1024 crg 1 thr 8 write 36.15 MB/s 1 x 36.14 = 36.14 MB/s read 94.08 MB/s 1 x 94.09 = 94.09 MB/s
total_size 8388608K rsz 1024 crg 1 thr 16 write 35.84 MB/s 1 x 35.85 = 35.85 MB/s read 101.59 MB/s 1 x 101.59 = 101.59 MB/s
total_size 8388608K rsz 1024 crg 2 thr 2 write 35.34 MB/s 2 x 17.67 = 35.34 MB/s read 67.38 MB/s 2 x 33.69 = 67.39 MB/s
total_size 8388608K rsz 1024 crg 2 thr 4 write 39.09 MB/s 2 x 19.55 = 39.10 MB/s read 79.20 MB/s 2 x 39.60 = 79.19 MB/s
total_size 8388608K rsz 1024 crg 2 thr 8 write 40.40 MB/s 2 x 20.20 = 40.40 MB/s read 98.16 MB/s 2 x 49.09 = 98.17 MB/s
total_size 8388608K rsz 1024 crg 2 thr 16 write 37.73 MB/s 2 x 18.86 = 37.73 MB/s read 99.31 MB/s 2 x 49.66 = 99.32 MB/s
total_size 8388608K rsz 1024 crg 2 thr 32 write 38.08 MB/s 2 x 19.04 = 38.07 MB/s read 97.30 MB/s 2 x 48.66 = 97.31 MB/s
total_size 8388608K rsz 1024 crg 4 thr 4 write 38.38 MB/s 4 x 9.59 = 38.38 MB/s read 98.17 MB/s 4 x 24.55 = 98.19 MB/s
total_size 8388608K rsz 1024 crg 4 thr 8 write 38.25 MB/s 4 x 9.57 = 38.26 MB/s read 100.06 MB/s 4 x 25.01 = 100.06 MB/s
total_size 8388608K rsz 1024 crg 4 thr 16 write 39.42 MB/s 4 x 9.85 = 39.41 MB/s read 99.96 MB/s 4 x 25.00 = 99.98 MB/s
total_size 8388608K rsz 1024 crg 4 thr 32 write 39.43 MB/s 4 x 9.86 = 39.44 MB/s read 99.93 MB/s 4 x 24.99 = 99.95 MB/s
total_size 8388608K rsz 1024 crg 4 thr 64 write 38.22 MB/s 4 x 9.56 = 38.22 MB/s read 97.80 MB/s 4 x 24.45 = 97.81 MB/s
total_size 8388608K rsz 1024 crg 8 thr 8 write 38.73 MB/s 8 x 4.84 = 38.76 MB/s read 87.71 MB/s 8 x 10.97 = 87.74 MB/s
total_size 8388608K rsz 1024 crg 8 thr 16 write 39.70 MB/s 8 x 4.96 = 39.67 MB/s read 81.09 MB/s 8 x 10.14 = 81.10 MB/s
total_size 8388608K rsz 1024 crg 8 thr 32 write 43.40 MB/s 8 x 5.43 = 43.41 MB/s read 81.21 MB/s 8 x 10.16 = 81.25 MB/s
total_size 8388608K rsz 1024 crg 8 thr 64 write 38.88 MB/s 8 x 4.86 = 38.91 MB/s read 67.10 MB/s 8 x 8.39 = 67.14 MB/s
total_size 8388608K rsz 1024 crg 8 thr 128 write 42.19 MB/s 8 x 5.27 = 42.19 MB/s read 65.92 MB/s 8 x 8.24 = 65.92 MB/s
IOPS performance was tested using iozone; here are the results.
OPS Mode. Output is in operations per second.
Include fsync in write timing
No retest option selected
Record Size 4 KB
File size set to 4194304 KB
Command line used: iozone -l 32 -O -i 0 -i 1 -i 2 -e -+n -r 4K -s 4G
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 32
Max process = 32
Throughput test with 32 processes
Each process writes a 4194304 Kbyte file in 4 Kbyte records

Children see throughput for 32 initial writers = 27764.87 ops/sec
Parent sees throughput for 32 initial writers = 26692.16 ops/sec
Min throughput per process = 840.07 ops/sec
Max throughput per process = 903.35 ops/sec
Avg throughput per process = 867.65 ops/sec
Min xfer = 975918.00 ops

Children see throughput for 32 readers = 26758.37 ops/sec
Parent sees throughput for 32 readers = 26755.12 ops/sec
Min throughput per process = 448.79 ops/sec
Max throughput per process = 1372.74 ops/sec
Avg throughput per process = 836.20 ops/sec
Min xfer = 342845.00 ops
As you can see, Lustre is a relatively high-performance file system and scales easily to store petabytes of data. Adding more space is as simple as building a new object server and running mkfs.lustre.
Create an RPM mirror using wget
If you want to set up a yum repo you can easily mirror an existing site using wget. To do this you will need to run this command.
wget --mirror -np --no-host-directories -A rpm,srpm http://downloads.whamcloud.com/public/lustre/latest-maintenance-release/
In this case we are mirroring the lustre rpm repo.
After the files are downloaded you can run the createrepo command to create yum metadata.
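Here is a rough sketch of both steps wrapped in Python; the download directory at the end is an assumption based on how --no-host-directories lays files out, so verify it on your system before running createrepo.

# Sketch: mirror the repo with wget, then build yum metadata with createrepo.
import subprocess

url = "http://downloads.whamcloud.com/public/lustre/latest-maintenance-release/"

subprocess.check_call([
    "wget", "--mirror", "-np", "--no-host-directories",
    "-A", "rpm,srpm", url,
])

# With --no-host-directories the hostname is dropped, leaving the URL path
# relative to the current directory (assumed layout -- check before running).
subprocess.check_call(["createrepo", "public/lustre/latest-maintenance-release"])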
Quickly remove old SSH keys with sed
If you work on a lot of servers and do a lot of reinstalls you will see the following error often.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
a8:6a:60:5a:48:64:ac:90:33:b9:f2:7c:be:56:92:81.
Please contact your system administrator.
Add correct host key in /var/root/.ssh/known_hosts to get rid of this message.
Offending key in /root/.ssh/known_hosts:9948
RSA host key for host.example.com has changed and you have requested strict checking.
Host key verification failed.
To save some time you can quickly remove the old host key with a single sed command:
sed -i '9948d' .ssh/known_hosts
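If you would rather match on the hostname than track down the line number, here is a hedged Python sketch that rewrites known_hosts, dropping any plain-text entry whose host field mentions the name (it will not match entries stored in the hashed format):

# Sketch: remove known_hosts entries for a given hostname.
import os
import sys

host = sys.argv[1]                              # e.g. host.example.com
path = os.path.expanduser("~/.ssh/known_hosts")

with open(path) as f:
    lines = f.readlines()

kept = []
for line in lines:
    fields = line.split(None, 1)
    # Keep blank lines and any entry whose host field does not mention the name.
    if not fields or host not in fields[0]:
        kept.append(line)

with open(path, "w") as f:
    f.writelines(kept)

print("removed %d entries for %s" % (len(lines) - len(kept), host))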
Building the wl module on linux 3.2
After upgrading my netbook kernel to the latest stable version available on backports.org I soon discovered that my wireless interface no longer worked. Trying to rebuild the module resulted in the following error:
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c: In function _wl_set_multicast_list:
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1435: error: struct net_device has no member named mc_list
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1435: error: struct net_device has no member named mc_count
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1436: error: dereferencing pointer to incomplete type
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1442: error: dereferencing pointer to incomplete type
make[4]: *** [/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.o] Error 1
make[3]: *** [_module_/usr/src/modules/broadcom-sta/amd64] Error 2
make[2]: *** [sub-make] Error 2
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-0.bpo.2-amd64'
make: *** [all] Error 2

root@netbook:/usr/src/modules/broadcom-sta/amd64# run "make API=WEXT"
bash: run: command not found
root@netbook:/usr/src/modules/broadcom-sta/amd64# "make API=WEXT"
bash: make API=WEXT: command not found
root@netbook:/usr/src/modules/broadcom-sta/amd64# make API=WEXT
KBUILD_NOPEDANTIC=1 make -C /lib/modules/`uname -r`/build M=`pwd`
make[1]: Entering directory `/usr/src/linux-headers-3.2.0-0.bpo.2-amd64'
  CC [M]  /usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.o
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c: In function _wl_set_multicast_list:
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1435: error: struct net_device has no member named mc_list
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1435: error: struct net_device has no member named mc_count
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1436: error: dereferencing pointer to incomplete type
/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.c:1442: error: dereferencing pointer to incomplete type
make[4]: *** [/usr/src/modules/broadcom-sta/amd64/src/wl/sys/wl_linux.o] Error 1
make[3]: *** [_module_/usr/src/modules/broadcom-sta/amd64] Error 2
make[2]: *** [sub-make] Error 2
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-0.bpo.2-amd64'
make: *** [all] Error 2
A bit of googling led me to a few patches that helped solve the issue. Here is a unified diff of my changes which should allow you to cleanly build and install the wl module using module-assistant.
http://www.watters.ws/broadcom_bcm4313_linux3.2.patch
One thing to note is that the source code needs to be patched BEFORE you run m-a, i.e. cd to /usr/src/modules/broadcom-sta/amd64/src/wl/sys and apply the patch from there.
I hope that somebody will find this useful.
If you're sick of the update notifier bugging you on your Ubuntu desktop you can easily set up a cron job to automatically take care of things.
sudo crontab -e

0 5 * * * apt-get -y upgrade
Change the time to whenever you want.
Directory '/var/run/screen' must have mode 777 fix
Directory '/var/run/screen' must have mode 777.
This is a fairly common error I've been seeing lately and the solution is quite simple.
chmod g+s /usr/bin/screen
If you're a bash user like me and log in to A LOT of servers every day, it helps to have a visible notation of what server you're actually on. Add this to your .bashrc file and source it.
# set prompt
PS1="[\u@`hostname`] \W > "
PS2=">"
There's a lot more you can do like adding a clock, the history number, etc. but I prefer to keep it simple.
Xen with File Server Replication
I've been working on a project at work that has kept me pretty busy this week; it involves shared storage and computing clusters, which has me pretty geeked out. I must say that I've learned A LOT about Solaris clustering, iSCSI, and disk replication. Throw ZFS and Xen on top of that and things get pretty complicated.
Here's a diagram of the current system I have built.
[Diagram of the Xen hosts and replicated file servers]
With this setup the file server has ZFS pools that replicate each disk over to the secondary; the concept is the same as a local disk mirror. I've tested out a few different failover situations, which have worked so far. The one wrench in the works is that Linux doesn't like having iSCSI targets moved around while the device is open. This means that the Xen server must shut down all running domains, take the volume offline, and then restart everything. Naturally this is not desirable in production, so I will be testing a Solaris server running xVM later this week to see how it handles moving iSCSI targets.
Virtualization is a big trend in computing right now and Solaris offers some very nice options of its own. One of these features is zones, including branded zones, which allow non-native operating systems to be installed into a container. This is similar to other technologies like OpenVZ and linux-vserver, but zones add the power of ZFS as well.
I started reading the excellent article on Blastwave about setting up zones in Solaris 10 and within an hour I had everything finished with a Linux branded zone running CentOS 3.9. Here's a quick run down on how to accomplish this.
First create a file system to contain your zones:
zfs create -o mountpoint=/zone rpool/zone
After this is done you need to create the zone and install it; these are two separate processes.
zonecfg -z lx-zone
lx-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zone/1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set brand=lx
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.35.210/24
zonecfg:zone1:net> set physical=hme1
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> ^D
For the install you will need the ISO images or a tarball of a file system. You also need to create a new distro file, as Solaris only goes up to CentOS 3.8 right now.
wget http://mirrors.example.com/CentOS/3.9/isos/i386/CentOS-3.9-i386-bin1of3.iso
wget http://mirrors.example.com/CentOS/3.9/isos/i386/CentOS-3.9-i386-bin2of3.iso
wget http://mirrors.example.com/CentOS/3.9/isos/i386/CentOS-3.9-i386-bin3of3.iso
cd /usr/lib/brand/lx/distros/
cp centos38.distro centos39.distro
Edit this file and change the serial to "1183469235.99" and the version to "3.9".

Now install the OS:
zoneadm -z lx-zone install -d /export/centos_3.9/ core
Check the results:
bash-2.05b# zoneadm list -vc
  ID NAME     STATUS     PATH
   0 global   running    /
   - lx-zone  installed  /zone/1
The STATUS is now "installed".
Boot the environment:
bash-2.05b# zoneadm -z lx-zone boot
bash-2.05b# zoneadm list -vc
  ID NAME     STATUS     PATH
   0 global   running    /
   2 lx-zone  running    /zone/1
bash-2.05b# ping 192.168.35.210
192.168.35.210 is alive
Now you can access the zone using zlogin:
# zlogin -C -e\@ lx-zone
[Connected to zone 'lx-zone' console]
CentOS release 3.9 (Final)
Kernel 2.4.21 on an i686
lx-zone login:
-bash-2.05b# uname -a
Linux lx-zone 2.4.21 BrandZ fake linux i686 i686 i386 GNU/Linux
As you can see zones are very powerful and allow a system to be divided up as you see fit. Each zone is completely isolated from the others and has its own cpu limits, process lists, network stack, etc. Even if a zone is completely wiped out it will not affect your global zone.
I've recently switched to OpenSolaris on my desktop at work and I just wanted to write a bit about my experiences.
Installation:
Installing the OS is about the same as for any other Unix system. Boot the CD, enter a hostname and root password, and select the drive you want to install to. One nice thing is that you can set up a ZFS mirror out of the box; if not, you can easily mirror your pool later without having to mess around too much, since one command takes care of it.
Hardware support:
All of the hardware on my computer was detected and the proper drivers were loaded without me having to intervene. As long as your hardware is listed on the compatibility list you'll be fine. Setting up X with multiple monitor support is also very easy, just run the Nvidia settings app and configure your screens.
Compatibility:
One issue I did have is that MP3 support isn't included as part of the default install; you have to download the codec package from Fluendo if you want MP3 support in Totem or anything else that uses the GStreamer backend. Flash also requires a manual install, but the plugin is pretty easy to set up.
Overall:
After using the system for a few weeks I'd have to say I'm impressed. If you have any experience at all with running a Linux desktop it shouldn't take long to adjust, and you'll have access to ZFS and DTrace, which simply don't have equivalents in Linux. In short, give it a try, you might like it.
Since the C-SPAN web site crashes Firefox, you need to use RealPlayer by itself to watch live streams. Just use this URL.
rtsp://rx-wes-sea74.rbn.com/farm/pull/tx-rbn-sea001:2459/farm/cspan/g2cspan/live/cspan1-g2.rm
Bad C-SPAN, no donut.
While setting up my new music server today I had a small issue to take care of: setting up playlists. While I do like having tracks play randomly, most of my music is meant to be listened to as a complete album (Dark Side of the Moon, for example).
Enter Python: in less than 25 lines of code I came up with a solution. The script below walks through my music directory, shuffles the albums, and then creates a playlist file.
#!/usr/bin/env python
# Build a shuffled, album-ordered playlist for ices (Python 2).
import commands, os
from random import shuffle

# Every directory under the music root is treated as one album.
cmd = "find /export/home/music -type d"
dirs = commands.getoutput(cmd).split("\n")
shuffle(dirs)

f = open("/usr/local/etc/ices-playlist.txt", "w")
for dir in dirs:
    # List the mp3 files in this album in track order.
    cmd = "find '%s' -maxdepth 1 -type f -name '*.mp3' -print | sort" % dir
    output = commands.getoutput(cmd)
    f.write(output)
    f.write("\n")
f.close()

# Strip the blank lines left by directories with no mp3 files.
os.system("sed -i '/^$/d' /usr/local/etc/ices-playlist.txt")
I've been studying economics lately since I'm a stats freak and I find the economy really interesting. I also like to plan things financially, so it's nice to spot where trends are going. Here's a chart of diesel prices over the last 5 years, adjusted for inflation. I already had the price data in my database, so all I had to add was a table for the consumer price index and a view to display the new data. The values are in 1982 dollars; to convert to current values just multiply them by 2.13, which is the CPI for March 2008.
Diesel Prices - Inflation Adjusted:
Last night I set up ushare to stream videos from my PC to the Xbox. It's a lot more comfortable sitting on the couch to watch movies, and now I can just download anything that I want to watch.
Setting things up wasn't too difficult. I had to add an extra NIC and run a crossover cable for the connection, and I also had to set up IP masquerading, which only requires 4 simple iptables rules. After that is done, just start up ushare and point it at your video directory; the Xbox will automatically see the share and let you browse videos.
If you don't have an Xbox, ushare also works with the PlayStation 3 or any other UPnP or DLNA device. There are also dedicated boxes you can buy for your TV that just need a network connection.
I could swear that Google is reading my mind. I was just thinking the other day that it would be nice if somebody made a simple, easy to use API for generating charts on web pages. Lo and behold Google goes and does it.
I've been messing with other options like PyX for Python, MRTG, and gnuplot, but all of them are clunky and the images they generate are ugly. The charts generated by Google are just regular PNG files, so you can embed them in any web page with a simple image tag.
For you Python users out there, I've written a small Python module to encode values and generate a request URL; you can find it here.
Here is a small test application that I made, it graphs my car's fuel mileage history and the cost per mile over time by pulling data from a table in postgres.
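For anyone curious what the request URLs look like, here is a minimal sketch of the idea. This is not the module linked above; the parameter names cht, chs, and chd come from the public Chart API documentation of that era, so treat it as an illustration only.

# Sketch: build a line-chart URL for the old Google Chart API.
def chart_url(values, width=400, height=200):
    # Text encoding expects values scaled into the 0-100 range.
    top = max(values)
    scaled = ",".join("%.1f" % (v * 100.0 / top) for v in values)
    return ("http://chart.apis.google.com/chart?"
            "cht=lc&chs=%dx%d&chd=t:%s" % (width, height, scaled))

# Made-up mileage-style numbers, just to show the output format.
print(chart_url([24.1, 25.3, 23.8, 26.0]))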
I started playing Halo 3 at my friend's place and I got hooked so I decided I had to get a system for myself.
After doing some research I discovered that most of the 'broken' systems out there are easily repairable; all you need is 8 bolts and some washers, along with some Arctic Silver for the heat sinks.
Looking around eBay I found a place that sells broken electronics and had a bunch of systems available; I sniped one and won it for a decent price. After I got the system, the first thing I did was tear it apart and proceed with the X-clamp removal process. I put it back together and everything works, no more red ring of death!
While my repairs were successful, I would only recommend doing this if you're into electronics repair and have some experience working on circuits; you can easily screw things up if you don't know what's going on.