From the creators of BackTrack comes Kali Linux, the most advanced and versatile penetration testing distribution ever created… http://www.kali.org/
Linux CLI reference
Command | Description |
• | apropos whatis | Show commands pertinent to string. |
• | man -t man | ps2pdf - > man.pdf | Make a PDF of a manual page |
• | which command | Show full path name of command |
• | time command | See how long a command takes |
• | time cat | Start stopwatch. Ctrl-d to stop. |
• | nice info | Run a low priority command (the "info" reader in this case) |
• | renice 19 -p $$ | Make shell (script) low priority. Use for non-interactive tasks |
dir navigation
• | cd - | Go to previous directory |
• | cd | Go to $HOME directory |
• | (cd dir && command) | Go to dir, execute command and return to current dir |
• | pushd . | Put current dir on stack so you can popd back to it |
file searching
• | alias l='ls -l --color=auto' | Quick dir listing |
• | ls -lrt | List files by date. |
• | ls /usr/bin | pr -T9 -W$COLUMNS | Print in 9 columns to width of terminal |
• | find -name '*.[ch]' | xargs grep -E 'expr' | Search 'expr' in this dir and below. |
• | find -type f -print0 | xargs -r0 grep -F 'example' | Search all regular files for 'example' in this dir and below |
• | find -maxdepth 1 -type f | xargs grep -F 'example' | Search all regular files for 'example' in this dir |
• | find -maxdepth 1 -type d | while read dir; do echo $dir; echo cmd2; done | Process each item with multiple commands (in while loop) |
• | find -type f ! -perm -444 | Find files not readable by all (useful for web site) |
• | find -type d ! -perm -111 | Find dirs not accessible by all (useful for web site) |
• | locate -r 'file[^/]*\.txt' | Search cached index for names. This regex is like glob *file*.txt |
• | look reference | Quickly search (sorted) dictionary for prefix |
• | grep --color reference /usr/share/dict/words | Highlight occurrences of regular expression in dictionary |
archives and compression
• | gpg -c file | Encrypt file |
• | gpg file.gpg | Decrypt file |
• | tar -c dir/ | bzip2 > dir.tar.bz2 | Make compressed archive of dir/ |
• | bzip2 -dc dir.tar.bz2 | tar -x | Extract archive (use gzip instead of bzip2 for tar.gz files) |
• | tar -c dir/ | gzip | gpg -c | ssh user@remote 'dd of=dir.tar.gz.gpg' | Make encrypted archive of dir/ on remote machine |
• | find dir/ -name '*.txt' | tar -c --files-from=- | bzip2 > dir_txt.tar.bz2 | Make archive of subset of dir/ and below |
• | find dir/ -name '*.txt' | xargs cp -a --target-directory=dir_txt/ --parents | Make copy of subset of dir/ and below |
• | ( tar -c /dir/to/copy ) | ( cd /where/to/ && tar -x -p ) | Copy (with permissions) copy/ dir to /where/to/ dir |
• | ( cd /dir/to/copy && tar -c . ) | ( cd /where/to/ && tar -x -p ) | Copy (with permissions) contents of copy/ dir to /where/to/ |
• | ( tar -c /dir/to/copy ) | ssh -C user@remote 'cd /where/to/ && tar -x -p' | Copy (with permissions) copy/ dir to remote:/where/to/ dir |
• | dd bs=1M if=/dev/sda | gzip | ssh user@remote 'dd of=sda.gz' | Backup harddisk to remote machine |
rsync (Network efficient file copier: use the --dry-run option for testing)
• | rsync -P rsync://rsync.server.com/path/to/file file | Only get diffs. Do multiple times for troublesome downloads |
• | rsync --bwlimit=1000 fromfile tofile | Locally copy with rate limit. It's like nice for I/O |
• | rsync -az -e ssh --delete ~/public_html/ remote.com:'~/public_html' | Mirror web site (using compression and encryption) |
• | rsync -auz -e ssh remote:/dir/ . && rsync -auz -e ssh . remote:/dir/ | Synchronize current directory with remote one |
ssh (Secure SHell)
• | ssh $USER@$HOST command | Run command on $HOST as $USER (default command=shell) |
• | ssh -f -Y $USER@$HOSTNAME xeyes | Run GUI command on $HOSTNAME as $USER |
• | scp -p -r $USER@$HOST: file dir/ | Copy with permissions to $USER's home directory on $HOST |
• | ssh -g -L 8080:localhost:80 root@$HOST | Forward connections to $HOSTNAME:8080 out to $HOST:80 |
• | ssh -R 1434:imap:143 root@$HOST | Forward connections from $HOST:1434 in to imap:143 |
wget (multi purpose download tool)
• | (cd dir/ && wget -nd -pHEKk http://www.pixelbeat.org/cmdline.html) | Store local browsable version of a page to the current dir |
• | wget -c http://www.example.com/large.file | Continue downloading a partially downloaded file |
• | wget -r -nd -np -l1 -A '*.jpg' http://www.example.com/dir/ | Download a set of files to the current directory |
• | wget ftp://remote/file[1-9].iso/ | FTP supports globbing directly |
• | wget -q -O- http://www.pixelbeat.org/timeline.html | grep 'a href' | head | Process output directly |
• | echo 'wget url' | at 01:00 | Download url at 1AM to current dir |
• | wget --limit-rate=20k url | Do a low priority download (limit to 20KB/s in this case) |
• | wget -nv --spider --force-html -i bookmarks.html | Check links in a file |
• | wget --mirror http://www.example.com/ | Efficiently update a local copy of a site (handy from cron) |
networking (Note ifconfig, route, mii-tool, nslookup commands are obsolete)
• | ethtool eth0 | Show status of ethernet interface eth0 |
• | ethtool --change eth0 autoneg off speed 100 duplex full | Manually set ethernet interface speed |
• | iwconfig eth1 | Show status of wireless interface eth1 |
• | iwconfig eth1 rate 1Mb/s fixed | Manually set wireless interface speed |
• | iwlist scan | List wireless networks in range |
• | ip link show | List network interfaces |
• | ip link set dev eth0 name wan | Rename interface eth0 to wan |
• | ip link set dev eth0 up | Bring interface eth0 up (or down) |
• | ip addr show | List addresses for interfaces |
• | ip addr add 1.2.3.4/24 brd + dev eth0 | Add (or del) ip and mask (255.255.255.0) |
• | ip route show | List routing table |
• | ip route add default via 1.2.3.254 | Set default gateway to 1.2.3.254 |
• | tc qdisc add dev lo root handle 1:0 netem delay 20msec | Add 20ms latency to loopback device (for testing) |
• | tc qdisc del dev lo root | Remove latency added above |
• | host pixelbeat.org | Lookup DNS ip address for name or vice versa |
• | hostname -i | Lookup local ip address (equivalent to host `hostname`) |
• | whois pixelbeat.org | Lookup whois info for hostname or ip address |
• | netstat -tupl | List internet services on a system |
• | netstat -tup | List active connections to/from system |
windows networking (Note samba is the package that provides all this windows specific networking support)
• | smbtree | Find windows machines. See also findsmb |
• | nmblookup -A 1.2.3.4 | Find the windows (netbios) name associated with ip address |
• | smbclient -L windows_box | List shares on windows machine or samba server |
• | mount -t smbfs -o fmask=666,guest //windows_box/share /mnt/share | Mount a windows share |
• | echo 'message' | smbclient -M windows_box | Send popup to windows machine (off by default in XP sp2) |
text manipulation (Note sed uses stdin and stdout. Newer versions support in-place editing with the -i option)
• | sed 's/string1/string2/g' | Replace string1 with string2 |
• | sed 's/\(.*\)1/\12/g' | Modify anystring1 to anystring2 |
• | sed '/ *#/d; /^ *$/d' | Remove comments and blank lines |
• | sed ':a; /\\$/N; s/\\\n//; ta' | Concatenate lines with trailing \ |
• | sed 's/[ \t]*$//' | Remove trailing spaces from lines |
• | sed 's/\([`"$\\]\)/\\\1/g' | Escape shell metacharacters active within double quotes |
• | seq 10 | sed "s/^/      /; s/ *\(.\{7,\}\)/\1/" | Right align numbers |
• | sed -n '1000{p;q}' | Print 1000th line |
• | sed -n '10,20p;20q' | Print lines 10 to 20 |
• | sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q' | Extract title from HTML web page |
• | sed -i 42d ~/.ssh/known_hosts | Delete a particular line |
• | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | Sort IPV4 ip addresses |
• | echo 'Test' | tr '[:lower:]' '[:upper:]' | Case conversion |
• | tr -dc '[:print:]' < /dev/urandom | Filter non printable characters |
• | history | wc -l | Count lines |
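The sed recipes above only work with straight ASCII quotes (copy-paste from blog reposts often substitutes curly quotes). As a quick sanity check, the comment-stripping and right-alignment one-liners behave like this:

```shell
# Strip comments and blank lines; only "key=value" survives.
printf '# comment\n\nkey=value\n' | sed '/ *#/d; /^ *$/d'

# Right-align numbers: pad every line with spaces, then keep
# only the last 7 characters of each.
seq 10 | sed "s/^/      /; s/ *\(.\{7,\}\)/\1/"
```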
set operations (Note you can export LANG=C for speed. Also these assume no duplicate lines within a file)
• | sort file1 file2 | uniq | Union of unsorted files |
• | sort file1 file2 | uniq -d | Intersection of unsorted files |
• | sort file1 file1 file2 | uniq -u | Difference of unsorted files |
• | sort file1 file2 | uniq -u | Symmetric Difference of unsorted files |
• | join -t'\0' -a1 -a2 file1 file2 | Union of sorted files |
• | join -t'\0' file1 file2 | Intersection of sorted files |
• | join -t'\0' -v2 file1 file2 | Difference of sorted files |
• | join -t'\0' -v1 -v2 file1 file2 | Symmetric Difference of sorted files |
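A minimal worked example of the sort/uniq set operations, using two throwaway files (the names are arbitrary):

```shell
printf 'a\nb\nc\n' > file1
printf 'b\nc\nd\n' > file2

sort file1 file2 | uniq            # union: a b c d
sort file1 file2 | uniq -d         # intersection: b c
sort file1 file1 file2 | uniq -u   # difference file2 - file1: d
```

Listing file1 twice in the difference case guarantees every file1 line appears at least twice, so uniq -u keeps only lines unique to file2.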
math
• | echo '(1 + sqrt(5))/2' | bc -l | Quick math (calculate φ, the golden ratio). |
• | echo 'pad=20; min=64; (100*10^6)/((pad+min)*8)' | bc | More complex (int) e.g. This shows max FastE packet rate |
• | echo 'pad=20; min=64; print (100E6)/((pad+min)*8)' | python | Python handles scientific notation |
• | echo 'pad=20; plot [64:1518] (100*10**6)/((pad+x)*8)' | gnuplot -persist | Plot FastE packet rate vs packet size |
• | echo 'obase=16; ibase=10; 64206' | bc | Base conversion (decimal to hexadecimal) |
• | echo $((0x2dec)) | Base conversion (hex to dec) (shell arithmetic expansion) |
• | units -t '100m/9.58s' 'miles/hour' | Unit conversion (metric to imperial) |
• | units -t '500GB' 'GiB' | Unit conversion (SI to IEC prefixes) |
• | units -t '1 googol' | Definition lookup |
• | seq 100 | (tr '\n' +; echo 0) | bc | Add a column of numbers. |
calendar
• | cal -3 | Display a calendar |
• | cal 9 1752 | Display a calendar for a particular month year |
• | date -d fri | What date is it this Friday. |
• | [ $(date -d "tomorrow" +%d) = "01" ] || exit | Exit a script unless it's the last day of the month |
• | date --date='25 Dec' +%A | What day does xmas fall on, this year |
• | date --date='@2147483647' | Convert seconds since the epoch (1970-01-01 UTC) to date |
• | TZ='America/Los_Angeles' date | What time is it on the west coast of the US (use tzselect to find TZ) |
• | date --date='TZ="America/Los_Angeles" 09:00 next Fri' | What's the local time for 9AM next Friday on the west coast of the US |
• | echo "mail -s 'get the train' P@draigBrady.com < /dev/null" | at 17:45 | Email reminder |
• | echo "DISPLAY=$DISPLAY xmessage cooker" | at "NOW + 30 minutes" | Popup reminder |
locales
• | printf "%'d\n" 1234 | Print number with thousands grouping appropriate to locale |
• | BLOCK_SIZE=\'1 ls -l | Get ls to do thousands grouping appropriate to locale |
• | echo "I live in `locale territory`" | Extract info from locale database |
• | LANG=en_IE.utf8 locale int_prefix | Lookup locale info for specific country. |
• | locale | cut -d= -f1 | xargs locale -kc | less | List fields available in locale database |
recode (Obsoletes iconv, dos2unix, unix2dos)
• | recode -l | less | Show available conversions (aliases on each line) |
• | recode windows-1252.. file_to_change.txt | Windows "ansi" to local charset (auto does CRLF conversion) |
• | recode utf-8/CRLF.. file_to_change.txt | Windows utf8 to local charset |
• | recode iso-8859-15..utf8 file_to_change.txt | Latin9 (western europe) to utf8 |
• | recode ../b64 < file.txt > file.b64 | Base64 encode |
• | recode /qp.. < file.qp > file.txt | Quoted printable decode |
• | recode ..HTML < file.txt > file.html | Text to HTML |
• | recode -lf windows-1252 | grep euro | Lookup table of characters |
• | echo -n 0x80 | recode latin-9/x1..dump | Show what a code represents in latin-9 charmap |
• | echo -n 0x20AC | recode ucs-2/x2..latin-9/x | Show latin-9 encoding |
• | echo -n 0x20AC | recode ucs-2/x2..utf-8/x | Show utf-8 encoding |
CDs
• | gzip < /dev/cdrom > cdrom.iso.gz | Save copy of data cdrom |
• | mkisofs -V LABEL -r dir | gzip > cdrom.iso.gz | Create cdrom image from contents of dir |
• | mount -o loop cdrom.iso /mnt/dir | Mount the cdrom image at /mnt/dir (read only) |
• | cdrecord -v dev=/dev/cdrom blank=fast | Clear a CDRW |
• | gzip -dc cdrom.iso.gz | cdrecord -v dev=/dev/cdrom - | Burn cdrom image (use dev=ATAPI -scanbus to confirm dev) |
• | cdparanoia -B | Rip audio tracks from CD to wav files in current dir |
• | cdrecord -v dev=/dev/cdrom -audio -pad *.wav | Make audio CD from all wavs in current dir (see also cdrdao) |
• | oggenc --tracknum='track' track.cdda.wav -o 'track.ogg' | Make ogg file from wav file |
disk space
• | ls -lSr | Show files by size, biggest last |
• | du -s * | sort -k1,1rn | head | Show top disk users in current dir. |
• | df -h | Show free space on mounted filesystems |
• | df -i | Show free inodes on mounted filesystems |
• | fdisk -l | Show disks partitions sizes and types (run as root) |
• | rpm -q -a --qf '%10{SIZE}\t%{NAME}\n' | sort -k1,1n | List all packages by installed size (Bytes) on rpm distros |
• | dpkg-query -W -f='${Installed-Size;10}\t${Package}\n' | sort -k1,1n | List all packages by installed size (KBytes) on deb distros |
• | dd bs=1 seek=2TB if=/dev/null of=ext3.test | Create a large test file (taking no space). |
• | > file | Truncate data of file or create an empty file |
monitoring/debugging
• | tail -f /var/log/messages | Monitor messages in a log file |
• | strace -c ls >/dev/null | Summarise/profile system calls made by command |
• | strace -f -e open ls >/dev/null | List system calls made by command |
• | ltrace -f -e getenv ls >/dev/null | List library calls made by command |
• | lsof -p $$ | List paths that process id has open |
• | lsof ~ | List processes that have specified path open |
• | tcpdump not port 22 | Show network traffic except ssh. |
• | ps -e -o pid,args --forest | List processes in a hierarchy |
• | ps -e -o pcpu,cpu,nice,state,cputime,args --sort pcpu | sed '/^ 0.0 /d' | List processes by % cpu usage |
• | ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS | List processes by mem (KB) usage. |
• | ps -C firefox-bin -L -o pid,tid,pcpu,state | List all threads for a particular process |
• | ps -p 1,2 | List info for particular process IDs |
• | last reboot | Show system reboot history |
• | free -m | Show amount of (remaining) RAM (-m displays in MB) |
• | watch -n.1 'cat /proc/interrupts' | Watch changeable data continuously |
system information ('#' means root access is required)
• | uname -a | Show kernel version and system architecture |
• | head -n1 /etc/issue | Show name and version of distribution |
• | cat /proc/partitions | Show all partitions registered on the system |
• | grep MemTotal /proc/meminfo | Show RAM total seen by the system |
• | grep “model name” /proc/cpuinfo | Show CPU(s) info |
• | lspci -tv | Show PCI info |
• | lsusb -tv | Show USB info |
• | mount | column -t | List mounted filesystems on the system (and align output) |
• | grep -F capacity: /proc/acpi/battery/BAT0/info | Show state of cells in laptop battery |
# | dmidecode -q | less | Display SMBIOS/DMI information |
# | smartctl -A /dev/sda | grep Power_On_Hours | How long has this disk (system) been powered on in total |
# | hdparm -i /dev/sda | Show info about disk sda |
# | hdparm -tT /dev/sda | Do a read speed test on disk sda |
# | badblocks -s /dev/sda | Test for unreadable blocks on disk sda |
interactive
• | readline | Line editor used by bash, python, bc, gnuplot, … |
• | screen | Virtual terminals with detach capability, … |
• | mc | Powerful file manager that can browse rpm, tar, ftp, ssh, … |
• | gnuplot | Interactive/scriptable graphing |
• | links | Web browser |
• | xdg-open . | open a file or url with the registered desktop application |
miscellaneous
• | alias hd='od -Ax -tx1z -v' | Handy hexdump. (usage e.g.: hd /proc/self/cmdline | less) |
• | alias realpath='readlink -f' | Canonicalize path. (usage e.g.: realpath ~/../$USER) |
• | set | grep $USER | Search current environment |
• | touch -c -t 0304050607 file | Set file timestamp (YYMMDDhhmm) |
• | python -m SimpleHTTPServer | Serve current directory tree at http://$HOSTNAME:8000/ |
This is just a repost from https://www.sanctuarydatasystems.co.uk/data-backup/linux-cli-reference/
Lazy Linux: 10 essential tricks for admins (by Vallard Benincosa via developerWorks)
The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?
The trick is to prove your efficiency to management. While I won’t attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin’s bag of tricks. These tips will save you time—and even if you don’t get paid more money to be more efficient, you’ll at least have more time to play Halo.
Trick 1: Unmounting the unresponsive DVD drive
The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won’t happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn’t figure out what was running and why it wouldn’t release the DVD drive. But this is ineffective.
Here’s how you find the process that holds your DVD drive and eject it to your heart’s content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:
# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done
Now open up a second terminal and try to eject the DVD drive:
# eject
You’ll get a message like:
umount: /media/cdrom: device is busy
Before you free it, let’s find out who is using it.
# fuser /media/cdrom
You see the process was running and, indeed, it is our fault we cannot eject the disk.
Now, if you are root, you can exercise your godlike powers and kill processes:
# fuser -k /media/cdrom
Boom! Just like that, freedom. Now solemnly unmount the drive:
# eject
fuser is good.
Trick 2: Getting your screen back when it’s hosed
Try this:
# cat /bin/cat
Behold! Your terminal looks like garbage. Everything you type looks like you’re looking into the Matrix. What do you do?
You type reset. But wait, you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat—especially if you are doing this on a production machine.
Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:
# reset
Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.
Trick 3: Collaboration with screen
David, the high-maintenance user from product engineering, calls: “I need you to help me understand why I can’t compile supercode.c on these new machines you deployed.”
“Fine,” you say. “What machine are you on?”
David responds: “Posh.” (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:
# su - david
Then you go over to posh:
# ssh posh
Once you are there, you run:
# screen -S foo
Then you holler at David:
“Hey David, run the following command on your terminal: # screen -x foo
.”
This will cause your and David’s sessions to be joined together in the holy Linux shell. You can type or he can type, but you’ll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.
At last you both see what the problem is: David’s compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.
The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.
But I’ll give you one last tip while you’re in your screen session. To detach from it and leave it open, type Ctrl-A D. (I mean, hold down the Ctrl key and strike the A key. Then push the D key.) You can then reattach by running the screen -x foo command again.
Trick 4: Getting back the root password
You forgot your root password. Nice work. Now you’ll just have to reinstall the entire machine. Sadly enough, I’ve seen more than a few people do this. But it’s surprisingly easy to get on the machine and change the password. This doesn’t work in all cases (like if you made a GRUB password and forgot that too), but here’s how you do it in a normal case with a CentOS Linux example.
First reboot the system. When it reboots you’ll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.
Figure 1. GRUB screen after reboot
Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You’ll then see something like Figure 2:
Figure 2. Ready to edit the kernel line
Use the arrow key again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments:
Figure 3. Append the argument with the number 1
Then press Enter, B, and the kernel will boot up to single-user mode. Once here you can run the passwd command, changing the password for user root:
sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully
Now you can reboot, and the machine will boot up with your new password.
Trick 5: SSH back door
Many times I’ll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come in to you.
In its crudest form, this is called “poking a hole in the firewall.” I’ll call it an SSH back door. To use it, you’ll need a machine on the Internet that you can use as an intermediary.
In our example, we’ll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.
Figure 4. Poking a hole in the firewall
Here’s how to proceed:
- Check that what you’re doing is allowed, but make sure you ask the right people. Most people will cringe that you’re opening the firewall, but what they don’t understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of “ask-for-forgiveness-instead-of-permission.” Either way, use your judgment and don’t blame me if this doesn’t go your way.
- SSH from ginger to blackbox.example.com with the -R flag. I’ll assume that you’re the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you’ll forward instructions of port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: you’re not putting ginger out on the Internet naked. You can do this with the following syntax:
~# ssh -R 2222:localhost:22 thedude@blackbox.example.com
Once you are into blackbox, you just need to stay logged in. I usually enter a command like:
thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done
to keep the machine busy. And minimize the window.
- Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You’ll have to give them your password:
root@tech:~# ssh thedude@blackbox.example.com
- Once tech is on the blackbox, they can SSH to ginger using the following command:
thedude@blackbox:~$ ssh -p 2222 root@localhost
- Tech will then be prompted for a password. They should enter the root password of ginger.
- Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
Trick 6: Remote VNC session through an SSH tunnel
VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.
For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.
You can try SSH’ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you’ll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.
Let’s assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you’ll do something similar but forward VNC ports instead. Here’s what you do:
- Start a VNC server session on ginger. This is done by running something like:
root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99
The options tell the VNC server to start up with a resolution of 1024×768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, 8 may be a better option. Using :99 specifies the display number the VNC server will run on. The VNC protocol starts at port 5900, so specifying :99 means the server is accessible on port 5999. When you start the session, you’ll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)
- SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:
root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com
Once you run this command, you’ll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:
thedude@blackbox:~$ vncviewer localhost:99
That would forward the port through SSH to ginger. But we’re interested in letting tech get VNC access to ginger. To accomplish this, you’ll need another tunnel.
- From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:
root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com
This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you’ll need to leave this session open. Now you’re ready to VNC from tech!
- From tech, VNC to ginger by running the command:
root@tech:~# vncviewer localhost:99
Tech will now have a VNC session directly to ginger.
While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.
Let me add a trick to this trick: If tech was running the Windows® operating system and didn’t have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like what is shown in Figure 5.
Figure 5. Putty can forward SSH ports for tunneling
If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.
Trick 7: Checking your bandwidth
Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger’s shared filesystem.
The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.
So they do this. But now the question is: How much bandwidth do they really have?
Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from? Well,
1Gb = 1024Mb; 1024Mb/8 = 128MB; “b” = “bits,” “B” = “bytes”
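That arithmetic is easy to check with shell arithmetic expansion. (The article uses the 1Gb = 1024Mb convention; with strict SI units, 1000/8 gives 125MBps instead.)

```shell
# 1024 megabits per second, 8 bits per byte -> megabytes per second
echo $((1024 / 8))
```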
But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:
# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz
You’ll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I’ll compile it in the home directory of the bob user, which is visible on both nodes:
tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install
On ginger, run:
# /home/bob/perf/bin/iperf -s -f M
This machine will act as the server and print out performance speeds in MBps.
On the beckham node, run:
# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60
You’ll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.
In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you’d expect. If you see something much less, then you should check for a problem.
I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!
Trick 8: Command-line scripting and utilities
A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.
For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1); done >>/etc/hosts
Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
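As a variation on the loop above, bash can strip the zero padding itself with $((10#$i)), which interprets the padded counter as base 10, so the separate expr counter isn't needed. A sketch that writes to a scratch file instead of /etc/hosts:

```shell
# Emit "IP hostname" pairs n001..n200; $((10#$i)) turns "001" into 1
# so it can serve as the last octet of the address.
for i in $(seq -w 200); do
  echo "192.168.99.$((10#$i)) n$i"
done > hosts.test
```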
As another example, let’s suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here’s a way to do this using SSH.
Assume the SSH is set up to authenticate without a password. Then run:
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}'; done | sort | uniq
A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let’s pick it apart and uncover the mystery.
First you’re doing a loop through 001-200. The padding with leading zeros is done with the -w option to the seq command. Then you substitute the num variable to create the host you’re going to SSH to. Once you have the target host, give the command to it. In this case, it’s:
free -m | grep Mem | awk '{print $2}'
That command says to:
- Use the free command to get the memory size in megabytes.
- Take the output of that command and use grep to get the line that has the string Mem in it.
- Take that line and use awk to print the second field, which is the total memory in the node.
This operation is performed on every node.
Once you have performed the command on every node, the entire output of all 200 nodes is piped to the sort command so that all the memory values are sorted.
Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases:
- If all the nodes, n001-n200, have the same memory size, then only one number will be displayed. This is the size of memory as seen by each operating system.
- If node memory sizes differ, you will see several memory size values.
- Finally, if the SSH failed on a certain node, then you may see some error messages.
This command isn’t perfect. If you find that a value of memory is different than what you expect, you won’t know on which node it was or how many nodes there were. Another command may need to be issued for that.
What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.
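Under the same assumptions (passwordless SSH, hosts named n001 through n200), one way to close that gap is to keep each node's name next to its value and then group nodes by memory size with awk, so a mismatched node is identified directly. A sketch:

```shell
# For each node, print "<node> <memory_MB>", then group node names by
# memory size; any mismatched node shows up on its own line.
# Assumes passwordless SSH to n001..n200, as in the article.
for num in $(seq -w 200); do
    echo "n$num $(ssh n$num free -m | awk '/^Mem/ {print $2}')"
done | awk '{nodes[$2] = nodes[$2] " " $1} END {for (m in nodes) print m ":" nodes[m]}'
```

Each output line is a memory size followed by every node reporting it, so the odd one out is visible at a glance.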
Trick 9: Spying on the console
Some software prints error messages to the console that may not show up in your SSH session. The vcs devices let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you’ll be able to see what he typed.
In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.
Trick 10: Random system information collection
In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I’ll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.
First, let’s gather information about the processor. This is easily done as follows:
# cat /proc/cpuinfo
This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.
A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:
# cat /proc/cpuinfo | grep processor | wc -l
I would then expect to see 8 as the value. If I don’t, I call up the vendor and tell them to send me another processor.
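The same count can be had with fewer processes, since grep can count matches itself; on systems with GNU coreutils, nproc reports a processor count directly. A sketch:

```shell
# Count "processor" lines without the extra cat and wc stages.
grep -c '^processor' /proc/cpuinfo

# coreutils shortcut (counts the processors the current process may use):
nproc
```

Note that nproc honors CPU affinity, so in a constrained container it can report fewer processors than /proc/cpuinfo lists.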
Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. Running # df -h also shows how the disk was partitioned.
And to end the list, here’s a way to look at the firmware of your system—a method to get the BIOS level and the firmware on the NIC.
To check the BIOS version, you can run the dmidecode command. Unfortunately, you can’t easily grep for a single value in its output, so pipe it through less and page to the section you need. On my Lenovo T61 laptop, the output looks like this:
# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...
This is much more efficient than rebooting your machine and looking at the POST output.
To examine the driver and firmware versions of your Ethernet adapter, run ethtool
:
# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0
There are thousands of tricks you can learn from someone who’s an expert at the command line. The best ways to learn are to:
- Work with others. Share screen sessions and watch how others work—you’ll see new approaches to doing things. You may need to swallow your pride and let other people drive, but often you can learn a lot.
- Read the man pages. Seriously; reading man pages, even for commands you know like the back of your hand, can provide amazing insights. For example, did you know you can do network programming with awk?
- Solve problems. As the system administrator, you are always solving problems, whether they are created by you or by others. This is called experience, and experience makes you better and more efficient.
I hope at least one of these tricks helped you learn something you didn’t know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don’t like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.
Learn
- Read the Linux Professional Institute exam prep series on developerWorks for a solid grounding in the basics to complement these tricks.
- See “Sharing computers on a Linux (or heterogeneous) network, Part 1” (developerWorks, Dec 2001) for more discussion of SSH and VNC.
- In the developerWorks Linux zone, find more resources for Linux developers, and scan our most popular articles and tutorials.
- See all Linux tips and Linux tutorials on developerWorks.
- Stay current with developerWorks technical events and Webcasts.
Get products and technologies
- Order the SEK for Linux, a two-DVD set containing the latest IBM trial software for Linux from DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®.
- With IBM trial software, available for download directly from developerWorks, build your next development project on Linux.
Discuss
- Get involved in the developerWorks community through blogs, forums, podcasts, and spaces.
Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.
Original article: http://www.ibm.com/developerworks/linux/library/l-10sysadtips/index.html
9 Ways to Make Linux More Secure (by Mark Sanborn via nixtutor.com)
The Linux operating system has already been proven to be very reliable and secure. It is often the most popular operating system found on web servers, largely credited to its track record in security, but can it be improved?
1. Use SELinux
Security-Enhanced Linux was originally developed by the National Security Agency and has since been merged into the 2.6 kernel to provide additional security measures for the Linux operating system. Enabling SELinux is probably one of the most important things you can do if you care about creating a ridiculously secure operating system.
“While problems with the correctness or configuration of applications may allow the limited compromise of individual user programs and system daemons, they do not pose a threat to the security of other user programs and system daemons or to the security of the system as a whole.” SELinux
Although SELinux is one of the best things you can do in regards to security, it may not be right for everyone. The main criticism to SELinux is the difficulty in setting up and maintaining the system.
Fedora comes with SELinux enabled by default.
2. Subscribe to a Vulnerability Alert Service
Often times it is not the operating system itself that is vulnerable. Vulnerabilities are usually found in the applications and additional services that are installed on the system itself. One of the best ways to stay secure is to make sure you have the latest version of the application and that there are no known vulnerabilities for the version you have.
Here are some of my favorite alert services:
- RSS: SecurityFocus
- Mailing List: Bugtraq
If you find your email/RSS reader is filling up with too many vulnerabilities that don’t affect the applications you are using, check out OSVDB and subscribe to vulnerability alerts for only the applications that you use.
3. Disable Unused Services and Applications
We know that applications are almost always the cause of vulnerabilities, and for this reason it is best to disable anything that you don’t use. OpenBSD is touted as one of the most secure distributions in existence. According to OpenBSD’s philosophy, all non-essential services are disabled, and OpenBSD claims, “Only two remote holes in the default install, in a heck of a long time!” Disabling unneeded services and applications is a huge contributor to OpenBSD’s security record.
Learn from one of the most secure operating systems and disable services that you are not using.
4. Check System Logs
If you are subscribed to NixTutor you should have a pretty good grasp on how to monitor logs and search through them. Checking system logs will often be the first way to check if a system has been compromised or malicious activity is afoot.
Here is a recent example where someone was trying to login to an FTP service with an automated script.
Tue May 19 18:01:49 2009 [pid 2277] CONNECT: Client "206.155.47.130"
Tue May 19 18:01:52 2009 [pid 2276] [Administrator] FAIL LOGIN: Client "206.155.47.130"
Tue May 19 18:01:55 2009 [pid 2276] [Administrator] FAIL LOGIN: Client "206.155.47.130"
How do we stop this kind of automated attack? Well, one solution would be port knocking.
5. Consider Port Knocking
In a nutshell, port knocking is a way of opening pre-defined ports on a system remotely using a secret “knock”. The knock consists of sending special packets to specific ports in a secret sequence. Once the sequence of packets has been sent, the server will open a port for your IP address.
If you have open ports consider adding another level of protection with port knocking. I wrote about setting up port knocking in Linux and FreeBSD in the past. Port knocking is a really cool solution to prevent automated attacks against known applications. It virtually stops automated scripts and port scanners completely.
The only problem with port knocking is that it isn’t really suited for public access. For example if you are going to run the Apache webserver it wouldn’t make sense to make the client go through a port knock just to visit the site; however, if your intention is to hide the fact that you have a remote access server like SSH running port knocking is wonderful.
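To make the mechanism concrete, here is a minimal single-knock sketch using iptables' recent match. The port numbers are arbitrary examples, not taken from the articles above, and real deployments typically use knockd or a multi-port sequence rather than a single knock:

```shell
# Hypothetical single-knock example (requires root; ports are illustrative).
# A SYN to port 7000 records the source IP in the KNOCK list (and is dropped);
# for the next 10 seconds that IP may open an SSH connection; all other
# inbound SSH attempts are dropped.
iptables -A INPUT -p tcp --dport 7000 -m recent --name KNOCK --set -j DROP
iptables -A INPUT -p tcp --dport 22 -m recent --name KNOCK --rcheck --seconds 10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```

From the client side, the "knock" is then just any TCP probe to port 7000 followed by a normal SSH connection within the window.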
6. Use Iptables
Iptables is a packet inspection framework included in the Linux kernel that allows you to build a state-of-the-art firewall in Linux. Many modern-day routers are simply embedded Linux machines with iptables, like the Linksys WRT54G.
Learning how to write good firewall rules has a fairly steep learning curve, but it is worth the effort. Of course, if you don’t have the time but still want to take advantage of the security iptables can bring, check out FirewallBuilder.
FirewallBuilder is basically a GUI for iptables (netfilter), ipfilter, pf, ipfw, Cisco PIX (FWSM, ASA) and Cisco routers extended access lists.
7. Deny All by Default
There are two schools of thought when it comes to creating firewall rules. One way is to allow everything by default and then restrict access to certain ports and applications. This is almost always the way firewalls are set up, as it is the easiest to configure and maintain. Allowing all by default is nice for system admins because everything just works, and there are no user complaints to deal with.
The other method of setting up firewalls is to deny all incoming and outgoing traffic by default, only allowing approved traffic through by creating exceptions. This is a much better way but requires a lot of thought and planning of which types of traffic you are going to allow.
If you care about security, take the extra time it takes to develop a deny by default plan.
A default deny would look something like this:
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
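With DROP policies in place, nothing flows (including your own SSH session) until you add exceptions. As a minimal sketch, assuming SSH on port 22 and using the legacy state match (requires root; adjust to your services), the usual first allowances are loopback, established traffic, and inbound SSH:

```shell
# Minimal exceptions for a default-deny firewall: local loopback,
# replies to established connections, and new inbound SSH.
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT  -p tcp --dport 22 -m state --state NEW -j ACCEPT
```

Every additional service you run gets its own explicit exception; everything else stays dropped by policy.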
8. Use an Intrusion Detection System
An Intrusion Detection System or IDS is a great way to monitor malicious hacking attempts on your machine. The idea of an IDS is to log traffic and look for certain patterns that are known to be used for malicious purposes. When the IDS detects malicious traffic it will log and notify you. This allows you to tweak your firewall to block that type of access or adjust your policies to deal with the attack.
An IDS is not usually set up to block attacks but rather to log them and keep tabs on what attackers are doing. It is up to you, the administrator, to refine your firewall rules to block uninvited access. Using a deny-by-default policy will make refining rules much easier.
Snort is a great IDS for Linux machines and claims to be the de facto standard for intrusion detection/prevention.
9. Use Full Drive Encryption
According to the 2006 Security Breaches Matrix, a large number of the data leaks were caused by stolen or missing laptops. If the data had been encrypted, these leaks could have been prevented.
If you have a mobile device or are paranoid about security, full drive encryption provides peace of mind that your data is yours and only yours.
Distributions like Fedora and Ubuntu offer full drive encryption options when you install the OS. Hard drive manufacturers are even starting to build encryption right into the hard drives.
10. Your Favorite Security Tip
Number 10 is left to you. What is your favorite way to make Linux more secure? Leave your tip in the comments below!
Original post: http://www.nixtutor.com/freebsd/9-ways-to-make-linux-more-secure/
25 More Sick Linux Commands (by Isaiah via blog.urfix.com)
You might remember my post 25 Best Linux Commands. Think of this as part two: here is another list of really useful commands that you might find handy.
1) Like top, but for files
watch -d -n 2 'df; ls -FlAt;'
2) Download an entire website
wget --random-wait -r -p -e robots=off -U mozilla http://www.example.com
-p tells wget to include all page requisites, such as images.
-e robots=off tells wget not to obey the robots.txt file.
-U mozilla identifies wget as a browser.
--random-wait lets wget choose a random number of seconds to wait between requests, to avoid getting blacklisted.
Other useful wget parameters:
--limit-rate=20k limits the rate at which it downloads files.
-b continues wget after logging out.
-o $HOME/wget_log.txt logs the output.
3) List the size (in human readable form) of all sub folders from the current location
du -h --max-depth=1
4) A very simple and useful stopwatch
time read (ctrl-d to stop)
time read -sn1 (s:silent, n:number of characters. Press any character to stop)
5) Quick access to the ascii table.
man ascii
6) Shutdown a Windows machine from Linux
net rpc shutdown -I ipAddressOfWindowsPC -U username%password
This will issue a shutdown command to the Windows machine. username must be an administrator on the Windows machine. Requires samba-common package installed. Other relevant commands are:
net rpc shutdown -r : reboot the Windows machine
net rpc abortshutdown : abort shutdown of the Windows machine
Type:
net rpc
to show all relevant commands
7) Jump to a directory, execute a command and jump back to current dir
(cd /tmp && ls)
8) Display the top ten running processes – sorted by memory usage
ps aux | sort -nk +4 | tail
ps returns all running processes which are then sorted by the 4th field in numerical order and the top 10 are sent to STDOUT.
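On systems with procps-ng, ps can do the sorting itself, which also keeps the header row visible. A sketch (the --sort option is GNU-specific):

```shell
# Header plus the ten processes using the most memory, sorted by ps itself.
ps aux --sort=-%mem | head -n 11
```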
9) List of commands you use most often
history | awk '{a[$2]++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
10) Reboot machine when everything is hanging (raising a skinny elephant)
<alt> + <print screen/sys rq> + <R> - <S> - <E> - <I> - <U> - <B>
If the machine is hanging and the only help would be the power button, this key-combination will help to reboot your machine (more or less) gracefully.
R – gives back control of the keyboard
S – issues a sync
E – sends all processes but init the TERM signal
I – sends all processes but init the KILL signal
U – remounts all filesystems read-only to prevent an fsck at reboot
B – reboots the system
Save your files before trying this out; this will reboot your machine without warning!
http://en.wikipedia.org/wiki/Magic_SysRq_key
11) Make ‘less’ behave like ‘tail -f’
less +F somelogfile
Using +F will put less in follow mode, which works like ‘tail -f’. To stop scrolling, press Ctrl-C (the interrupt); then you get the normal benefits of less (scrolling, searching, etc.).
Pressing Shift-F will resume the ‘tailing’.
12) Set audible alarm when an IP address comes online
ping -i 60 -a IP_address
Waiting for your server to finish rebooting? Issue the command above and you will hear a beep when it comes online. The -i 60 flag tells ping to wait 60 seconds between pings, putting less strain on your system. Vary it to your need. The -a flag tells ping to include an audible bell in the output when a packet is received (that is, when your server comes online).
13) Backticks are evil
echo "The date is: $(date +%D)"
This is a simple example of proper command nesting, using $() instead of backticks. There are a number of advantages of $() over backticks. First, they can be easily nested without escapes:
program1 $(program2 $(program3 $(program4)))
versus
program1 `program2 `program3 `program4```
Second, they’re easier to read than trying to decipher the difference between the backtick and the single quote: ` vs ‘. The only drawback $() suffers from is its lack of total portability. If your script must be portable to the archaic Bourne shell, or to old versions of the C shell or Korn shell, then backticks are appropriate; otherwise, we should all get into the habit of $(). Your future script maintainers will thank you for producing cleaner code.
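A quick demonstration of the readability difference: nesting with $() needs no escaping at any depth.

```shell
# Two levels of nesting, no escapes needed; each $() reads inside-out.
inner=$(echo "world")
outer=$(echo "hello $(echo "$inner")")
echo "$outer"   # prints: hello world
```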
14) Simulate typing
echo "You can simulate on-screen typing just like in the movies" | pv -qL 10
This will output the characters at 10 per second.
15) python smtp server
python -m smtpd -n -c DebuggingServer localhost:1025
This command will start a simple SMTP server listening on port 1025 of localhost. This server simply prints to standard output all email headers and the email body.
16) Watch Network Service Activity in Real-time
lsof -i
17) diff two unsorted files without creating temporary files
diff <(sort file1) <(sort file2)
bash/ksh subshell redirection (as file descriptors) used as input to diff
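A self-contained illustration of the technique: the two printf pipelines below stand in for unsorted files, and diff compares their sorted forms without any temporary file touching the disk.

```shell
# diff sees two anonymous file descriptors (e.g. /dev/fd/63), not real files.
# diff exits non-zero when the inputs differ, hence the trailing || true.
diff <(printf 'b\na\n' | sort) <(printf 'a\nc\n' | sort) || true
```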
18) Rip audio from a video file.
mplayer -ao pcm -vo null -vc dummy -dumpaudio -dumpfile <output-file> <input-file>
replace accordingly
19) Matrix Style
tr -c "[:digit:]" " " < /dev/urandom | dd cbs=$COLUMNS conv=unblock | GREP_COLOR="1;32" grep --color "[^ ]"
20) This command will show you all the string (plain text) values in RAM
sudo dd if=/dev/mem | strings
A fun thing to do with RAM is actually open it up and take a peek.
21) Display which distro is installed
cat /etc/issue
22) Easily search running processes (alias).
alias 'ps?'='ps ax | grep '
23) Create a script of the last executed command
echo "!!" > foo.sh
Sometimes commands are long, but useful, so it’s helpful to be able to make them permanent without having to retype them. An alternative could use the history command, and a cut/sed line that works on your platform.
history -1 | cut -c 7- > foo.sh
24) Extract tarball from internet without local saving
wget -qO - "http://www.tarball.com/tarball.gz" | tar zxvf -
25) Create a backdoor on a machine to allow remote connection to bash
nc -vv -l -p 1234 -e /bin/bash
This will launch a listener on the machine that will wait for a connection on port 1234. When you connect from a remote machine with something like :
nc 192.168.0.1 1234
You will have console access to the machine through bash. (Be careful with this one.)
Original post: http://blog.urfix.com/25-sick-linux-commands/
A Comprehensive Guide to Sharing Your Data Across Multi-Booting Windows, Mac, and Linux PCs (by Whitson Gordon via lifehacker.com)
We’re platform agnostic at Lifehacker, which is why we love dual- and triple-booting our computers. Unfortunately sharing data between operating systems can be a huge headache. Here’s how to stay organized by keeping it all in one place.
There’s nothing more annoying than booting into OS X only to realize you need access to some files on your unreadable Linux partition, or Windows, or any combination thereof. The more operating systems we put on one computer, the more our data gets scattered around partitions that we can’t read or write from other OSes. With the right drivers and a bit of organization, though, you can keep all your data in one central location, and read and write that data from any OS under the sun.
Of course, not everyone triple-boots their system, so I’ve divided this guide into easily scannable sections, so you can skip right to the sections that apply to your machine (i.e., if you don’t have OS X, you won’t need to know how to read HFS volumes, nor will you need any drivers for OS X).
Part One: Sharing Drives Between Operating Systems
One of the biggest roadblocks to making your data available across operating systems is the different filesystem each one uses. OS X uses HFS+ and can’t write to NTFS drives; Windows uses NTFS and ignores pretty much everything else; and Linux has support for nearly everything (albeit with some serious hassle caused by stingy UNIX permissions). Thus, before you do anything else, you’ll need to install the correct drivers in each OS for reading and writing to other filesystems. Here are the best choices we’ve found in each situation.
Note: while it’s very likely that your OS X partition is HFS+ and your Windows partition is NTFS, your Linux partition could be any number of filesystems. Unfortunately, Ext4 (which is becoming the new standard) still isn’t supported in most third-party Ext drivers. For the most part, the drivers in this guide will work with Ext3 and Ext2 formatted Linux drives only. If your drive is Ext4, you may have to clone your Linux partition, using an Ext3-formatted drive as the destination.
Accessing Mac and Linux Drives in Windows
Reading and writing to Linux drives is easy in Windows, but there aren’t any free read/write HFS+ drivers for Windows, so for Mac volumes you’ll have to compromise somewhere. Here are your options.
For Mac Volumes
To install the Boot Camp drivers, just insert the Snow Leopard install disc into your Mac and install the drivers when prompted. If you’re on a Hackintosh, you won’t get this option, since the disc won’t recognize your computer as a Mac. To install the HFS drivers on a Hackintosh, you can use this installer instead.
Unfortunately, these drivers are read-only. If you absolutely have to write to your HFS partition, the only way to do so is to spring for either Paragon’s $40 HFS+ for Windows 8 or Mediafour’s $50 MacDrive 8. It isn’t cheap, but sadly it’s the only read/write option currently available.
For Linux Volumes
Luckily, there is a relatively pain-free Ext2/Ext3 driver for Windows called Ext2Fsd. Just download it and install it like a normal Windows program. When you get to the “Select Additional Tasks” stage, check all the necessary boxes for your setup (I chose to check all three). Once you’re done, however, you’ll get this error message:
To fix it, navigate to Ext2Fsd’s install location (C:\Program Files\Ext2Fsd by default), right-click on Ext2Mgr.exe, hit Properties, and check the “Run as Administrator” box under Compatibility. Then, double-click on it to set up your drive. Double-click on your Ext3 drive, click the Mount Points button, hit Add, and select a drive letter for your drive. I chose to create a permanent mount point for the drive so it’s always mounted. You can choose whatever you want at this stage. Once you’re done, you should be able to browse your Linux drive from Windows Explorer just as you would any other drive.
Accessing Windows and Linux Drives in Mac OS X
With the free, open-source utility MacFuse, you can enable support for Windows and Linux drives very easily in OS X. All it takes is a few simple installer packages. Before you install the drivers themselves, you’ll need to install MacFuse. Then, install either (or both) of the drivers below depending on your needs.
For Windows Volumes
While Mac OS X can read NTFS partitions out of the box, you can’t actually write to them. If you need both read and write support, you can install the NTFS-3G driver after installing MacFuse. Just head over to their homepage, download the software, and double-click on the package to install. When prompted, I chose to use UBLIO caching during the installation process, since my NTFS partition is on an internal drive and is unlikely to be unintentionally disconnected. When you reboot, you should have full write support.
Note that their homepage is a bit confusing—the people who work on NTFS-3G also develop a driver called Tuxera NTFS for Mac, which is not what you want (unless you feel like paying $30 for slightly better performance, in which case go for it). Make sure you’re downloading “NTFS-3G for Mac OS X” before you install. You may have to scroll down the blog to find a post containing the latest download. It isn’t the most well-organized homepage.
For Linux Volumes
To get Ext3 and Ext2 support in OS X, just download the Fuse-ext2 driver from this Sourceforge page and install the package. When you reboot, you should have read access to your Linux drive.
While the drive does support reading and writing, it’s set as read-only by default. You can enable it by tweaking a configuration file, but I will note that while many have had success with this method in Snow Leopard, it keeps throwing me an error when I try to write to the drive, so your mileage may vary. To make OS X mount the drive as read/write, just navigate to /System/Library/Filesystems/. Right-click on the fuse-ext2.fs file and hit “Show Package Contents.” Then, drag fuse-ext2.util to the desktop, right-click on it, and hit “Open With”, choosing TextEdit when prompted.
Use Cmd+F to find the line that says OPTIONS="auto_xattr,defer_permissions" near the middle of the file. Add ,rw+ to that line inside the quotes, so it reads:
OPTIONS="auto_xattr,defer_permissions,rw+"
When you reboot, the drive should be mounted as read/write. Note once again that write support is a bit buggy in this driver, so just be wary.
Accessing Windows and Mac Drives in Linux
Most Linux distros come with full NTFS support built-in, as well as read support for HFS+. So, you only need to do anything extra in Linux if you want to write to Mac-formatted drives.
For Mac Volumes
By default, Mac OS X formats volumes in journaled HFS+ volumes. Journaling is a feature that improves data reliability, and unfortunately it makes HFS drives read-only in Linux. To disable journaling, just boot into OS X and fire up Disk Utility. Click on your HFS partition, hold the Option key, and click File in the menu bar. A new option to Disable Journaling will come up in the menu. Click that, and reboot into Linux. You should have read and write access to your HFS partition—however, the permissions on your Mac user’s home folder will prevent you from reading or writing those files. See Part Two below to fix that problem.
Part Two: Putting All Your Data in One Place
This part is optional, but I’ve found that using one home folder to store all my data (and linking to that home folder in the other two OSes) makes life a lot easier, especially since a few of the drivers listed above aren’t quite perfect. Plus, by putting all my data in one place, I can keep my music libraries synced together, pause torrent downloading in one OS and resume it in another, and so on.
First, pick which OS’s home folder you want to use for this—I like to use OS X’s home folder—and follow the instructions below to use it across OSes. Depending on your needs, you may choose to store all your data in your Windows or Linux home folder. The best way to decide is by which OS you use the most—since I barely use Windows (and thus didn’t feel like paying $40 for a read/write driver), I used my OS X partition as my main data dump, since it’s easy for Linux to read and write to it. The main idea is to not use a partition that has bad write support in an OS you use often—so, if you’re a heavy OS X user, you wouldn’t want to put all your data on your Linux partition, since the OS X driver isn’t so great. Similarly, if you use Windows often, you wouldn’t want to put it all on your OS X partition (unless you want to pay $40 for MacDrive). Think about which partition would be most convenient for you and go with it—after all, you can always move your data later if you so choose.
Making Mac and Linux Home Folders Play Nicely with One Another
The great thing about OS X and Linux is that they are both UNIX-based operating systems, so they work pretty well together if you can get everything set up correctly. When you create a user in either operating system, it gives you a User ID number. OS X starts these numbers in the 500s, while Linux usually starts in the thousands. This is problematic because a different “user” owns your home folder in OS X than owns your home folder in Linux. As such, Linux will deny you access to your OS X home folder, since you don’t have the right permissions to access it.
There’s an easy fix, however—we just need to change our UID in one OS so that it matches the UID in the other. Unless you have a reason for choosing otherwise, we’re going to change our Linux UID to match our OS X one, since it’s a bit easier. By default, the first user in OS X has a UID of 501, but you can double-check this by going into System Preferences in OS X, right-clicking on your user, and hitting Advanced Options. If your User ID is something different from 501, replace 501 with your other UID in the terminal commands below.
Boot into Linux (we’re using Ubuntu in this example) and fire up the Terminal. First, we’re going to add a temporary user, since we don’t want to edit a user that we’re currently logged into. So, run the following commands in the Terminal, hitting Enter after each one:
sudo useradd -d /home/tempuser -m -s /bin/bash -G admin tempuser
sudo passwd tempuser
Type in a new password for the temporary user when prompted. Reboot and log in as tempuser. Then, open up the Terminal and type in the following commands, once again hitting Enter after each one (and replacing yourusername with your Linux user’s username):
sudo usermod --uid 501 yourusername
sudo chown -R 501:yourusername ~/
This will change your Linux user’s UID to 501 and fix your home folder permissions so that you still own them. Now, you should be able to read and write to both your Mac and Linux user’s home folder, no matter what OS you’re logged into.
You may also want to fix your login screen, since by default Ubuntu won’t list users with a UID of less than 1000. To do this, just open a Terminal, run gksudo gedit /etc/login.defs, and search for UID_MIN in the text file. Change that value from 1000 to 501, and when you reboot your user will be listed in the login screen.
Lastly, log back in as your normal user and run sudo userdel -r tempuser to delete the temporary user we created earlier.
If you like, you can create symlinks in one of your home folders that point to your main home folder for quick access. For example, since I use my OS X home folder as my main data dump, my Linux home folder is mostly empty. So, I created symlinks in my Linux home folder for Documents, Videos, Pictures, etc. that point to the equivalent folders on my Mac partition. You can do this by using the following Terminal command:
ln -s /path/to/linked/folder /path/to/symlink/
If you’re using your Linux home folder as the main one, you can use this same command to create symlinks that link to your Linux home folder instead.
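Creating those symlinks one by one gets tedious, so here is a short sketch that loops over the common folders. The link_home_dirs helper and the /media/Data/Users path are assumptions for illustration; adjust both to match your own mount point and folder names:

```shell
# Hypothetical helper: symlink common folders from the "main" home
# folder into the mostly-empty one, skipping any real directories
# that already exist so nothing gets clobbered.
link_home_dirs() {
    src_home=$1   # the main home, e.g. /media/Data/Users/yourname
    dst_home=$2   # the mostly-empty home, e.g. "$HOME" on Linux
    for d in Documents Music Pictures Videos; do
        # leave a real (non-symlink) directory alone
        if [ -e "$dst_home/$d" ] && [ ! -L "$dst_home/$d" ]; then
            continue
        fi
        ln -sfn "$src_home/$d" "$dst_home/$d"
    done
}
# e.g. link_home_dirs /media/Data/Users/yourname "$HOME"
```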
Note that if you’re using your Mac partition as the main home folder, you’ll probably also want to automatically mount it in Linux when you start up. You can do this by adding a line to the end of /etc/fstab
. This will vary from person to person, but mine looks like this:
/dev/sda3 /media/Data auto rw,user,auto 0 0
Where /dev/sda3 is the location of the partition containing the home folder and /media/Data is the path I want to use to navigate to it.
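If you re-run your setup, a blind append will pile up duplicate fstab lines, so it helps to guard it. A sketch follows; the add_fstab_entry helper is hypothetical, and on a real system you’d run it against /etc/fstab as root, then `sudo mount -a` to mount the new entry without rebooting:

```shell
# Hypothetical helper: append an fstab line only if the mount point
# isn't already listed. Run against /etc/fstab as root on a real system.
add_fstab_entry() {
    fstab=$1; dev=$2; mnt=$3
    if ! grep -q "[[:space:]]$mnt[[:space:]]" "$fstab"; then
        printf '%s %s auto rw,user,auto 0 0\n' "$dev" "$mnt" >> "$fstab"
    fi
}
# e.g. (as root)  add_fstab_entry /etc/fstab /dev/sda3 /media/Data
#                 mount -a
```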
Using Libraries in Windows 7
Since Windows doesn’t support UNIX permissions, you won’t need to mess with them at all—you should be able to read and write to your Mac and Linux home folders without a problem (as long as you have the correct drivers installed). To make them easier to access, we can use Windows 7’s awesome Libraries feature, which allows your Documents, Videos, Pictures, and other “libraries” to link to multiple folders on your drive, so you can access the files stored in your main home folder from shortcuts in the Windows Explorer sidebar (and in many applications).
To add those folders to each library, open up Windows Explorer. Right click on a Library (say, Documents), and hit Properties. Hit the “Include a folder” button and navigate to the Documents folder in your main home folder. Hit Include, and you should see it show up in the list. You can even click on it and hit “Set Save Location” to set it as the default save location for the types of files Windows associates with that Library. Repeat this for your other libraries and you’re all set on the Windows front.
Now, I just make sure all my applications point to the same directories in each OS. For example, I have Amarok watching my iTunes folder for new files, so when I add music to my iTunes library, it will show up automatically in Amarok (similarly, I can add newly downloaded music to iTunes’ “Add Automatically to iTunes” folder for it to automatically show up in both Amarok and iTunes). I tell my torrent downloader in each OS to download new torrents to the same location, so if I want to leave Linux and continue downloading a torrent in OS X, I can just reboot, add the torrent to Transmission’s queue, and it will pick up right where I left off in Linux. This way, you don’t need to use space-limited solutions like Dropbox (as awesome as they are for inter-computer syncing) to sync your data—it’s just always there. There are, of course, other ways to do this, but this is the way I have it set up. How do you share your data between multiple operating systems? Share your favorite strategies in the comments.
Send an email to Whitson Gordon, the author of this post, at whitson@lifehacker.com.
Original article: http://lifehacker.com/5702815/
How to Triple Boot Your Hackintosh with Windows and Linux (via lifehacker.com)
We’ve walked through how to triple-boot your Mac with Windows and Linux, but if you’re using a shiny new Hackintosh, the process is a bit more complicated. Here’s how to get all three operating systems up and running on your new PC.
While the Chameleon bootloader (the default boot screen for your Hackintosh) is a great friend to Hackintosh builders, Windows and Linux try to muck everything up by attempting to take over your computer with their own bootloaders, resetting the active partition, and throwing your partition tables out of sync. There are two ways to triple boot your Hackintosh. The first is very straightforward and allows you a lot of flexibility, while the second is much more complicated but offers other advantages depending on how many hard drives you have. This guide assumes you’ve already installed Mac OS X as described in our most recent Hackintosh guide, and, if you’re using the second method, that you still have the iBoot disc handy. You’ll also obviously need the Windows 7 and Linux installation discs as well. If you’ve got everything ready, follow the instructions below to get Windows 7 and Linux living harmoniously on the same PC.
The Easy Method: Use Multiple Hard Drives
By far the easiest way to triple boot your Hackintosh is to install your other operating systems to separate hard drives. Chameleon can see operating systems on any hard drive in your computer, and one of the advantages of building a desktop is that you have tons of extra drive bays to fill up. Chances are you probably have some extra drives lying around anyway, so this wouldn’t be too out of the way. This method doesn’t even warrant a how-to—you just install your extra drives in your system, then install Windows and Linux on each one using the default settings. You can even stick Linux and Windows on the same drive, if you want—it’s only when all three get together that you start to have problems. Photo by Justin Ruckman. If, for some reason, you want to keep them all on the same drive, roll up your sleeves and read on.
The Complicated Method: One Drive to Boot Them All
Putting all three OSes on one drive isn’t difficult, but you do need to perform all the steps correctly and in the right order, or you’ll be left with a confused mess on your machine. The only big advantage of this method is if you don’t have any extra hard drives lying around, or if you have a large enough SSD and want to take advantage of its speed in all three OSes.
Step One: Partition Your Drive
Right now, you should have a drive with just one partition containing Snow Leopard (plus your 200MB EFI partition, which won’t be visible in Disk Utility). Start up Disk Utility and click on the drive containing OS X in the left sidebar. Head over to the Partition tab, and click on your Mac OS X partition. Hit the plus sign at the bottom of the window twice, so you have a total of three partitions. Head to the upper right-hand corner of the window and name the second partition WINDOWS and the third one LINUX, formatting them both as FAT32 for now. If you need swap space for Linux, you can add a fourth partition, but nowadays this seems pretty unnecessary, so three partitions should be just fine. Hit the Apply button and let it work its magic.
When you’re done, insert your Windows 7 installation disc and restart your computer.
Step Two: Install Windows 7
Boot from the Windows 7 disc and head into the Windows installation. Make sure you do a Custom install, and when you’re given a list of hard drives, click on the partition named WINDOWS and hit “Drive Options (Advanced)”. Click Format to format the drive as NTFS, and then hit Next to start the installation. Your computer will reboot a few times, but you won’t have to mess with it at all, so go away and come back when it prompts you to name your computer.
As always, Windows is the biggest problem child in this debacle. When you reboot, you won’t be able to boot into OS X, but that’s fine—we’ll deal with all that in a moment. First, we’re going to get this Linux installation out of the way.
Step Three: Install Linux
For the purposes of this guide, we’re going to install Ubuntu 10.04, but you can use another version of Ubuntu if you want, or another distro altogether (like the super awesome Arch Linux). Just make sure you install Linux to the correct partition and make extra sure that you install Grub to the same partition to which you installed Linux, as described below.
Boot up from your Ubuntu CD and head into the installation. The first few steps are pretty self-explanatory; it’s when you get to the partition window that you want to pay attention. Hit “Specify Partitions Manually” and click Next. Double click your Linux partition’s entry in the table (at this point, it should be the only FAT32-formatted partition on your drive). Under “Use As”, choose your desired filesystem (if you aren’t sure, use Ext4, which seems to be the new standard). Check the “Format the Partition” box and choose / as the Mount Point. Hit OK. Before moving on, note the name of your Linux partition—the name will be something like /dev/sda4—and hit the Forward button to continue.
In the last window, where it says “Ready to Install”, hit the Advanced button. Under “Device for boot loader installation”, it should say something like /dev/sda. Change this to /dev/sda4, or whatever the name of your Linux partition is. Ordinarily, Grub will install itself to the Master Boot Record of the drive, because it wants to be your primary bootloader. In this case, we’re already using Chameleon, so we’re just going to stick this on Linux’s partition, since we won’t be using it to get into Windows or OS X. When you’re ready, hit the Install button and let Ubuntu do its thing. When you’re done, restart your computer.
Step Four: Fix the Windows Bootloader You Just Broke
You’d think keeping Grub away from Windows would leave Windows’ bootloader untouched, but these operating systems just don’t like to play nicely together. Unfortunately, when you first installed Mac OS X, you set your hard drive to use a GUID partition table (GPT), which is not fully compatible with Windows (Windows and Grub really prefer an MBR partition table). Now that you’ve installed Mac OS X, Windows, and Linux side-by-side, your drive is a GPT/MBR hybrid, and your partition tables are “out of sync”. To make the GPT and MBR tables play nicely with one another on the same drive, you need to sync them with a program called gptsync in Linux.
So, grab your iBoot CD and use it to boot into your new Linux partition (since Chameleon is strangely missing—we’ll get to that in a second). Download gptsync from your distro’s repositories (though Ubuntu users may want to use the .deb files available here instead of the older versions still in the repositories). Once it’s installed, pull up a Terminal window and type:
gptsync /dev/sda
where /dev/sda is the drive containing all your partitions. If you aren’t sure which one is the one you’re using, type in fdisk -l to see a list. Note that you aren’t using it on just one of the partitions (e.g. /dev/sda1); you’re using it on the entire drive. Once you’re done, your computer should successfully boot into Windows whenever you reboot.
Step Five: Set The OS X Partition as Active
When Windows installs, it makes itself the active partition on your computer, which means when you restart, your computer will just boot you into Windows as if OS X and Linux weren’t even there. We want the active partition to be our OS X partition, since it contains Chameleon, which lets us choose between the OSes when we start off. To pry Windows’ greedy hands off your hard drive, you’ll have to boot up from the iBoot CD into OS X and open up Terminal.
Type diskutil list and hit enter to see a list of your drives and their partitions. Note the identifier of your OS X partition (which will be labeled as Apple_HFS Snow Leopard). This should be something like disk0s2. Type sudo -s and enter your password to gain root permissions.
Next, type in fdisk -u /dev/rdisk0 and hit enter, where rdisk0 corresponds to the first number in your OS X partition’s identifier (for example, if its identifier were disk1s2 instead of disk0s2, you would type /dev/rdisk1 instead of rdisk0). Hit y to continue.
Then, type in fdisk -e /dev/rdisk0, where once again, rdisk0 corresponds to the correct disk. Type in p and hit enter, then type f 2, where 2 corresponds to the second number in your partition’s identifier (e.g. disk0s2). Hit enter. Enter w at the next prompt, then y to complete the process. Close the Terminal and reboot your computer.
If everything goes well, you should be greeted once again by the familiar Chameleon bootloader, which will now list Mac OS X, Windows, and Linux as available boot options. Double check and make sure each of them boots correctly. If they do, you’re finished! Enjoy your new triple-booting PC. If not, you may have done something wrong in the above steps. You can try googling any error codes you get and fixing it that way, or re-syncing the partition tables and trying again, but because of the complications in Windows and the GUID partition table, it might be simplest to just start from scratch. Back up your data in your OS X partition, reformat the entire drive, and start over. It’s a pain, but like we said before—these three OSes really, really don’t like to get along with one another when you try to put them all on the same drive.
If the pain of starting from scratch is too much to bear, reconsider the multiple-drive option—it won’t give you the speed boosts of an SSD (unless you buy three), and it might cost a bit more if you don’t already have drives lying around, but on the occasion that you need to reinstall one of the OSes or reformat part of your drive, it will be completely hassle-free, unlike the above method which has me pulling my hair out after just one day.
Lastly, as always, this may not be the only way to triple boot your Hackintosh, but it’s the method that, after a few tries, I’ve found works pretty well. So, if you have your own preferred method (or tips for others trying this one), share them with us in the comments.
Send an email to Whitson Gordon, the author of this post, at whitson@lifehacker.com.
Original post: http://lifehacker.com/5698205/