This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Many years ago, when I first began with Linux, installing applications and keeping a system up to date was not an easy feat. In fact, if you wanted to tackle either task you were bound for the command line. For some new users this left their machines outdated or without applications they needed. Of course, at the time, most everyone trying their hand at Linux knew they were getting into something that would require some work. That was simply the way it was. Fortunately times and Linux have changed. Now Linux is exponentially more user friendly – to the point where so much is automatic and point and click – that today’s Linux hardly resembles yesterday’s Linux.

But even though Linux has evolved into the user-friendly operating system it is, there are still some systems that are fundamentally different from their Windows counterparts. So it is always best to understand those systems in order to be able to use them properly. Within the confines of this article you will learn how to keep your Linux system up to date. In the process you might also learn how to install an application or two.

There is one thing to understand about updating Linux: Not every distribution handles this process in the same fashion. In fact, some distributions are distinctly different, right down to the file types they use for package management.

  • Ubuntu and Debian use .deb
  • Fedora, SuSE, and Mandriva use .rpm
  • Slackware uses .tgz archives which contain pre-built binaries
  • And of course there is also installing from source or pre-compiled .bin or .package files.
As you can see there are a number of possible systems (and the above list is not even close to being all-inclusive). So to make the task of covering this topic less epic, I will cover the Ubuntu and Fedora systems. I will touch on both the GUI as well as the command line tools for handling system updates.

Ubuntu Linux

Ubuntu Linux has become one of the most popular of all the Linux distributions. And through the process of updating a system, you should be able to tell exactly why this is the case. Ubuntu is very user friendly. Ubuntu uses two different tools for system update:
  • apt-get: Command line tool.
  • Update Manager: GUI tool.
Ubuntu Update Manager

The Update Manager is a nearly 100% automatic tool. With this tool you will not have to routinely check to see if there are updates available. Instead you will know updates are available because the Update Manager will open on your desktop (see Figure 1) as soon as updates become available, depending upon their type:
  • Security updates: Daily
  • Non-security updates: Weekly

If you want to manually check for updates, you can do this by clicking the Administration sub-menu of the System menu and then selecting the Update Manager entry. When the Update Manager opens click the Check button to see if there are updates available.

Figure 1 shows a listing of updates for an Ubuntu 9.10 installation. As you can see there are both Important Security Updates as well as Recommended Updates. If you want to get information about a particular update you can select the update and then click on the Description of update dropdown.
In order to update the packages follow these steps:
  1. Check the updates you want to install. By default all updates are selected.
  2. Click the Install Updates button.
  3. Enter your user (sudo) password.
  4. Click OK.

The updates will proceed and you can continue on with your work. Now some updates may require you either to log out of your desktop and log back in, or to reboot the machine. There is a new tool in development (Ksplice) that allows even a kernel update to not require a reboot.
Once all of the updates are complete the Update Manager main window will return, reporting that Your system is up to date.

Updating via command line

Now let’s take a look at the command line tools for updating your system. The Ubuntu package management system is called apt. Apt is a very powerful tool that can completely manage your system’s packages via the command line. Using the command line tool has one drawback – in order to check to see if you have updates, you have to run it manually. Let’s take a look at how to update your system with the help of apt. Follow these steps:

  1. Open up a terminal window.
  2. Issue the command sudo apt-get update to refresh the package lists, followed by sudo apt-get upgrade.
  3. Enter your user’s password.
  4. Look over the list of available updates (see Figure 2) and decide if you want to go through with the entire upgrade.
  5. To accept all updates press the ‘y’ key (no quotes) and hit Enter.
  6. Watch as the update happens.
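
For reference, the entire exchange looks something like this in the terminal (a minimal sketch; the package list you see will depend on your system):

sudo apt-get update
sudo apt-get upgrade

The first command refreshes the package lists, and the second displays the available upgrades and asks for confirmation before installing them.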

That’s it. Your system is now up to date. Let’s take a look at how the same process happens on Fedora (Fedora 12 to be exact).

Fedora Linux

Fedora is a direct descendant of Red Hat Linux, so it is the beneficiary of the Red Hat Package Management system (rpm). Like Ubuntu, Fedora can be upgraded by:
  • yum: Command line tool.
  • GNOME (or KDE) PackageKit: GUI tool.

GNOME PackageKit

Depending upon your desktop, you will either use the GNOME or the KDE front-end for PackageKit. In order to open up this tool you simply go to the Administration sub-menu of the System menu and select the Software Update entry. When the tool opens (see Figure 3) you will see the list of updates. To get information about a particular update all you need to do is to select a specific package and the information will be displayed in the bottom pane.

To go ahead with the update click the Install Updates button. As the process happens a progress bar will indicate where GNOME (or KDE) PackageKit is in the steps. The steps are:

  1. Resolving dependencies.
  2. Downloading packages.
  3. Testing changes.
  4. Installing updates.

When the process is complete, GNOME (or KDE) PackageKit will report that your system is up to date. Click the OK button when prompted.

Now let’s take a look at upgrading Fedora via the command line. As stated earlier, this is done with the help of the yum command. In order to take care of this, follow these steps:

Updating with the help of yum

  1. Open up a terminal window (Do this by going to the System Tools sub-menu of the Applications menu and select Terminal).
  2. Enter the su command to change to the super user.
  3. Type your super user password and hit Enter.
  4. Issue the command yum update and yum will check to see what packages are available for update.
  5. Look through the listing of updates (see Figure 4).
  6. If you want to go through with the update enter ‘y’ (no quotes) and hit Enter.
  7. Sit back and watch the updates happen.
  8. Exit out of the root user command prompt by typing “exit” (no quotes) and hitting Enter.
  9. Close the terminal when complete.
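
Condensed into a single session, the whole process looks something like this (a minimal sketch; the package listing will vary):

su
yum update
exit

Running yum update with no package names checks every installed package for an available update.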

Your Fedora system is now up to date.

Final Thoughts

Granted only two distributions were touched on here, but this should illustrate how easily a Linux installation is updated. Although the tools might not be universal, the concepts are. Whether you are using Ubuntu, OpenSuSE, Slackware, Fedora, Mandriva, or anything in-between, the above illustrations should help you through updating just about any Linux distribution. And hopefully this tutorial helps to show you just how user-friendly the Linux operating system has become.

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Joe “Zonker” Brockmeier from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

The first step is often the hardest, but don’t let that stop you. If you’ve ever wanted to learn how to write a shell script but didn’t know where to start, this is your lucky day.

If this is your first time writing a script, don’t worry — shell scripting is not that complicated. That is, you can do some complicated things with shell scripts, but you can get there over time. If you know how to run commands at the command line, you can learn to write simple scripts in just 10 minutes. All you need is a text editor and an idea of what you want to do. Start small and use scripts to automate small tasks. Over time you can build on what you know and wind up doing more and more with scripts.

Starting Off

Each script starts with a “shebang” and the path to the shell that you want the script to use, like so:

#!/bin/bash

The “#!” combo is called a shebang by most Unix geeks. It is used by the system to decide which interpreter should run the rest of the script, and it is ignored by the interpreter itself, which treats the line as a comment. Confused? Scripts can be written for all kinds of interpreters — bash, tcsh, zsh, or other shells, or for Perl, Python, and so on. You could even omit that line if you wanted to run the script by sourcing it at the shell, but let’s save ourselves some trouble and add it to allow scripts to be run non-interactively.
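
The same mechanism works for any interpreter. For example, Perl and Python scripts simply point the shebang at a different binary (the exact paths can vary by distribution):

#!/usr/bin/perl
#!/usr/bin/python

Everything below that first line just has to be valid code for whichever interpreter you named.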

What’s next? You might want to include a comment or two about what the script is for. Preface comments with the hash (#) character:

#!/bin/bash
# A simple script

Let’s say you want to run an rsync command from the script, rather than typing it each time. Just add the rsync command to the script that you want to use:

#!/bin/bash
# rsync script
rsync -avh --exclude="*.bak" /home/user/Documents/ /media/diskid/user_backup/Documents/

Save your file, and then make sure that it’s set executable. You can do this using the chmod utility, which changes a file’s mode. To set it so that a script is executable by you and not the rest of the users on a system, use “chmod 700 scriptname” — this will let you read, write, and execute (run) the script — but only your user. To see the results, run ls -lh scriptname and you’ll see something like this:

-rwx------ 1 jzb jzb   21 2010-02-01 03:08 echo

The first column of rights, rwx, shows that the owner of the file (jzb) has read, write, and execute permissions. The other columns with a dash show that other users have no rights for that file at all.

Variables

The above script is useful, but it has hard-coded paths. That might not be a problem, but if you want to write longer scripts that reference paths often, you probably want to utilize variables. Here’s a quick sample:

#!/bin/bash
# rsync using variables

SOURCEDIR=/home/user/Documents/
DESTDIR=/media/diskid/user_backup/Documents/

rsync -avh --exclude="*.bak" $SOURCEDIR $DESTDIR

There’s not a lot of benefit if you only reference the directories once, but if they’re used multiple times, it’s much easier to change them in one location than changing them throughout a script.

Taking Input

Non-interactive scripts are useful, but what if you need to give the script new information each time it’s run? For instance, what if you want to write a script to modify a file? One thing you can do is take an argument from the command line. So, for instance, when you run “script foo” the script will take the name of the first argument (foo):

#!/bin/bash

echo $1

Here bash will read the command line and echo (print) the first argument — that is, the first string after the command itself.
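
For example, if you saved that script as showarg and made it executable (the name showarg is just for illustration), a quick test would look like this:

$ ./showarg foo
foo

bash assigns arguments to $1, $2, $3, and so on, so you can reference as many as your script needs.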

You can also use read to accept user input. Let’s say you want to prompt a user for input:

#!/bin/bash

echo -e "Please enter your name: "
read name
echo "Nice to meet you $name"

That script will wait for the user to type in their name (or any other input, for that matter) and use it as the variable $name. Pretty simple, yeah? Let’s put all this together into a script that might be useful. Let’s say you want to have a script that will back up a directory you specify on the command line to a remote host:

#!/bin/bash

echo -e "What directory would you like to back up?" 
read directory

DESTDIR="username@remotehost:$directory/" # username@remotehost is a placeholder; substitute your own login and backup server

rsync --progress -avze ssh --exclude="*.iso" $directory $DESTDIR

That script will read in the directory you type and substitute it as the destination directory at the target system, as well as the local directory that will be synced. It might look a bit complex as a final script, but each of the bits that you need to know to put it together is pretty simple. A little trial and error and you’ll be creating useful scripts of your own.

Of course, this is just scraping the surface of bash scripting. If you’re interested in learning more, be sure to check out the Bash Guide for Beginners.

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

At some point in your career as a Linux administrator, you are going to have to view log files. After all, they are there for one very important reason…to help you troubleshoot an issue. In fact, every seasoned administrator will immediately tell you that the first thing to be done, when a problem arises, is to view the logs.

And there are plenty of logs to be found: logs for the system, logs for the kernel, for package managers, for Xorg, for the boot process, for Apache, for MySQL… For nearly anything you can think of, there is a log file.

Most log files can be found in one convenient location: /var/log. These are all system and service logs, those which you will lean on heavily when there is an issue with your operating system or one of the major services. For desktop app-specific issues, log files will be written to different locations (e.g., Thunderbird writes crash reports to ‘~/.thunderbird/Crash Reports’). Where a desktop application will write logs will depend upon the developer and if the app allows for custom log configuration.

We are going to focus on system logs, as that is where the heart of Linux troubleshooting lies. And the key issue here is, how do you view those log files?

Fortunately there are numerous ways in which you can view your system logs, all quite simply executed from the command line.

/var/log

This is such a crucial folder on your Linux systems. Open up a terminal window and issue the command cd /var/log. Now issue the command ls and you will see the logs housed within this directory (Figure 1).

Figure 1: A listing of log files found in /var/log/.

Now, let’s take a peek into one of those logs.

Viewing logs with less

One of the most important logs contained within /var/log is syslog. This particular log file logs everything except auth-related messages. Say you want to view the contents of that particular log file. To do that, you could quickly issue the command less /var/log/syslog. This command will open the syslog log file to the top. You can then use the arrow keys to scroll down one line at a time, the spacebar to scroll down one page at a time, or the mouse wheel to easily scroll through the file.

The one problem with this method is that syslog can grow fairly large; and, considering what you’re looking for will most likely be at or near the bottom, you might not want to spend the time scrolling a line or page at a time to reach the end. With syslog open in the less command, you can also hit the [Shift]+[g] combination to immediately go to the end of the log file. The end will be denoted by (END). You can then scroll up with the arrow keys or the scroll wheel to find exactly what you want.

This, of course, isn’t terribly efficient.

Viewing logs with dmesg

The dmesg command prints the kernel ring buffer. By default, the command will display all messages from the kernel ring buffer. From the terminal window, issue the command dmesg and the entire kernel ring buffer will print out (Figure 2).

Figure 2: A USB external drive displaying an issue that may need to be explored.

Fortunately, there is a built-in control mechanism that allows you to print out only certain facilities (such as daemon).

Say you want to view log entries for the user facility. To do this, issue the command dmesg --facility=user. If anything has been logged to that facility, it will print out.

Unlike the less command, issuing dmesg will display the full contents of the log and send you to the end of the file. You can always use your scroll wheel to browse through the buffer of your terminal window (if applicable), but a better approach is to pipe the output of dmesg to the less command like so:

dmesg | less

The above command will print out the contents of dmesg and allow you to scroll through the output just as you did viewing a standard log with the less command.

Viewing logs with tail

The tail command is probably one of the single most handy tools you have at your disposal for the viewing of log files. What tail does is output the last part of files. So, if you issue the command tail /var/log/syslog, it will print out only the last few lines of the syslog file.

But wait, the fun doesn’t end there. The tail command has a very important trick up its sleeve, by way of the -f option. When you issue the command tail -f /var/log/syslog, tail will continue watching the log file and print out the next line written to the file. This means you can follow what is written to syslog, as it happens, within your terminal window (Figure 3).
Figure 3: Following /var/log/syslog using the tail command.

Using tail in this manner is invaluable for troubleshooting issues.

To escape the tail command (when following a file), hit the [Ctrl]+[c] combination.

You can also instruct tail to only follow a specific amount of lines. Say you only want to view the last five lines written to syslog; for that you could issue the command:

tail -f -n 5 /var/log/syslog

The above command would follow input to syslog and only print out the most recent five lines. As soon as a new line is written to syslog, the oldest would be removed from the top. This is a great way to make the process of following a log file even easier. I strongly recommend not using this to view anything fewer than four or five lines, as you’ll wind up getting input cut off and won’t get the full details of the entry.
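
You can also combine tail with the other tools you already know. For example, to follow syslog but only see lines matching a particular pattern (the pattern error here is just an example), pipe the output through grep:

tail -f /var/log/syslog | grep -i error

Only the matching lines will scroll by, which makes spotting a recurring problem much easier.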

There are other tools

You’ll find plenty of other commands (and even a few decent GUI tools) to enable the viewing of log files. Look to more, grep, head, cat, multitail, and System Log Viewer to aid you in your quest to troubleshoot systems via log files.

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

This is a classic article written by Carla Schroder from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

grub command shell

Once upon a time we had legacy GRUB, the Grand Unified Bootloader version 0.97. Legacy GRUB had many virtues, but it became old and its developers did yearn for more functionality, and thus did GRUB 2 come into the world.

GRUB 2 is a major rewrite with several significant differences. It boots removable media, and can be configured with an option to enter your system BIOS. It’s more complicated to configure with all kinds of scripts to wade through, and instead of having a nice fairly simple /boot/grub/menu.lst file with all configurations in one place, the default is /boot/grub/grub.cfg. Which you don’t edit directly, oh no, for this is not for mere humans to touch, but only other scripts. We lowly humans may edit /etc/default/grub, which controls mainly the appearance of the GRUB menu. We may also edit the scripts in /etc/grub.d/. These are the scripts that boot your operating systems, control external applications such as memtest and os_prober, and theming. /boot/grub/grub.cfg is built from /etc/default/grub and /etc/grub.d/* when you run the update-grub command, which you must run every time you make changes.

The good news is that the update-grub script is reliable for finding kernels, boot files, and adding all operating systems to your GRUB boot menu, so you don’t have to do it manually.

We’re going to learn how to fix two of the more common failures. When you boot up your system and it stops at the grub> prompt, that is the full GRUB 2 command shell. That means GRUB 2 started normally and loaded the normal.mod module (and other modules which are located in /boot/grub/[arch]/), but it didn’t find your grub.cfg file. If you see grub rescue> that means it couldn’t find normal.mod, so it probably couldn’t find any of your boot files.

How does this happen? The kernel might have changed drive assignments, you moved your hard drives, you changed some partitions, or you installed a new operating system and moved things around. In these scenarios your boot files are still there, but GRUB can’t find them. So you can look for your boot files at the GRUB prompt, set their locations, and then boot your system and fix your GRUB configuration.

GRUB 2 Command Shell

The GRUB 2 command shell is just as powerful as the shell in legacy GRUB. You can use it to discover boot images, kernels, and root filesystems. In fact, it gives you complete access to all filesystems on the local machine regardless of permissions or other protections. Which some might consider a security hole, but you know the old Unix dictum: whoever has physical access to the machine owns it.

When you’re at the grub> prompt, you have a lot of functionality similar to any command shell such as history and tab-completion. The grub rescue> mode is more limited, with no history and no tab-completion.

If you are practicing on a functioning system, press C when your GRUB boot menu appears to open the GRUB command shell. You can stop the bootup countdown by scrolling up and down your menu entries with the arrow keys. It is safe to experiment at the GRUB command line because nothing you do there is permanent. If you are already staring at the grub> or grub rescue> prompt then you’re ready to rock.

The next few commands work with both grub> and grub rescue>. The first command you should run invokes the pager, for paging long command outputs:

grub> set pager=1

There must be no spaces on either side of the equals sign. Now let’s do a little exploring. Type ls to list all partitions that GRUB sees:

grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1)

What’s all this msdos stuff? That means this system has the old-style MS-DOS partition table, rather than the shiny new Globally Unique Identifiers partition table (GPT). (See Using the New GUID Partition Table in Linux (Goodbye Ancient MBR).) If you’re running GPT it will say (hd0,gpt1). Now let’s snoop. Use the ls command to see what files are on your system:

grub> ls (hd0,1)/
lost+found/ bin/ boot/ cdrom/ dev/ etc/ home/  lib/
lib64/ media/ mnt/ opt/ proc/ root/ run/ sbin/ 
srv/ sys/ tmp/ usr/ var/ vmlinuz vmlinuz.old 
initrd.img initrd.img.old

Hurrah, we have found the root filesystem. You can omit the msdos and gpt labels. If you leave off the slash it will print information about the partition. You can read any file on the system with the cat command:

grub> cat (hd0,1)/etc/issue
Ubuntu 14.04 LTS \n \l

Reading /etc/issue could be useful on a multi-boot system for identifying your various Linuxes.

Booting From grub>

This is how to set the boot files and boot the system from the grub> prompt. We know from running the ls command that there is a Linux root filesystem on (hd0,1), and you can keep searching until you verify where /boot/grub is. Then run these commands, using your own root partition, kernel, and initrd image:

grub> set root=(hd0,1)
grub> linux /boot/vmlinuz-3.13.0-29-generic root=/dev/sda1
grub> initrd /boot/initrd.img-3.13.0-29-generic
grub> boot

The first line sets the partition that the root filesystem is on. The second line tells GRUB the location of the kernel you want to use. Start typing /boot/vmli, and then use tab-completion to fill in the rest. Type root=/dev/sdX to set the location of the root filesystem. Yes, this seems redundant, but if you leave this out you’ll get a kernel panic. How do you know the correct partition? hd0,1 = /dev/sda1. hd1,1 = /dev/sdb1. hd3,2 = /dev/sdd2. I think you can extrapolate the rest.

The third line sets the initrd file, which must be the same version number as the kernel.

The fourth line boots your system.

On some Linux systems the current kernels and initrds are symlinked into the top level of the root filesystem:

$ ls -l /
vmlinuz -> boot/vmlinuz-3.13.0-29-generic
initrd.img -> boot/initrd.img-3.13.0-29-generic

So you could boot from grub> like this:

grub> set root=(hd0,1)
grub> linux /vmlinuz root=/dev/sda1
grub> initrd /initrd.img
grub> boot

Booting From grub rescue>

If you’re in the GRUB rescue shell the commands are different, and you have to load the normal.mod and linux.mod modules:

grub rescue> set prefix=(hd0,1)/boot/grub
grub rescue> set root=(hd0,1)
grub rescue> insmod normal
grub rescue> normal
grub rescue> insmod linux
grub rescue> linux /boot/vmlinuz-3.13.0-29-generic root=/dev/sda1
grub rescue> initrd /boot/initrd.img-3.13.0-29-generic
grub rescue> boot

Tab-completion should start working after you load both modules.

Making Permanent Repairs

When you have successfully booted your system, run these commands to fix GRUB permanently:

# update-grub
Generating grub configuration file ...
Found background: /usr/share/images/grub/Apollo_17_The_Last_Moon_Shot_Edit1.tga
Found background image: /usr/share/images/grub/Apollo_17_The_Last_Moon_Shot_Edit1.tga
Found linux image: /boot/vmlinuz-3.13.0-29-generic
Found initrd image: /boot/initrd.img-3.13.0-29-generic
Found linux image: /boot/vmlinuz-3.13.0-27-generic
Found initrd image: /boot/initrd.img-3.13.0-27-generic
Found linux image: /boot/vmlinuz-3.13.0-24-generic
Found initrd image: /boot/initrd.img-3.13.0-24-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
# grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.

When you run grub-install remember you’re installing it to the boot sector of your hard drive and not to a partition, so do not use a partition number like /dev/sda1.

But It Still Doesn’t Work

If your system is so messed up that none of this works, try the Super GRUB2 live rescue disk. The official GNU GRUB Manual 2.00 should also be helpful.

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

This is a classic article written by Swapnil Bhartiya from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

If you run a live or home server, moving files between local machines or two remote machines is a basic requirement. There are many ways to achieve that. In this article, we talk about scp (the secure copy command), which encrypts the transferred file and password so no one can snoop. With scp you don’t have to start an FTP session or log into the system.

The scp tool relies on SSH (Secure Shell) to transfer files, so all you need is the username and password for the source and target systems. Another advantage is that with scp you can move files between two remote servers from your local machine, in addition to transferring data between local and remote machines. In that case you need usernames and passwords for both servers. Unlike rsync, you don’t have to log into any of the servers to transfer data from one machine to another.

This tutorial is aimed at new Linux users, so I will keep things as simple as possible. Let’s get started.

Copy a single file from the local machine to a remote machine:

The scp command needs a source and destination to copy files from one location to another location. This is the pattern that we use:

scp localmachine/path_to_the_file username@server_ip:/path_to_remote_directory

In the following example I am copying a local file from my macOS system to my Linux server (macOS, being a UNIX operating system, has native support for all UNIX/Linux tools).

scp /Volumes/MacDrive/Distros/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Here, ‘swapnil’ is the user on the server and 10.0.0.75 is the server IP. It will ask you to provide the password for that user, and then copy the file securely.

I can do the same from my local Linux machine:

scp /home/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

If you are running Windows 10, then you can use Ubuntu bash on Windows to copy files from the Windows system to Linux server:

scp /mnt/c/Users/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Copy a local directory to a remote server:

If you want to copy the entire local directory to the server, then you can add the -r flag to the command:

scp -r localmachine/path_to_the_directory username@server_ip:/path_to_remote_directory/

Make sure that the source directory doesn’t have a forward slash at the end of the path; at the same time, the destination path *must* have a forward slash at the end.

Copy all files in a local directory to a remote directory

What if you only want to copy all the files inside a local directory to a remote directory? It’s simple: just add a forward slash and * at the end of the source directory and give the path of the destination directory. Don’t forget to add the -r flag to the command:

scp -r localmachine/path_to_the_directory/* username@server_ip:/path_to_remote_directory/

Copying files from remote server to local machine

If you want to make a copy of a single file, a directory, or all files on the server to the local machine, just follow the same examples above and exchange the source and destination.

Copy a single file:

scp username@server_ip:/path_to_remote_directory local_machine/path_to_the_file

Copy a remote directory to a local machine:

scp -r username@server_ip:/path_to_remote_directory local-machine/path_to_the_directory/

Make sure that the source directory doesn’t have a forward slash at the end of the path; at the same time, the destination path *must* have a forward slash at the end.

Copy all files in a remote directory to a local directory:

scp -r username@server_ip:/path_to_remote_directory/* local-machine/path_to_the_directory/

Copy files from one directory of the same server to another directory securely from local machine

Usually I ssh into that machine and then use the rsync command to perform the job, but with scp, I can do it easily without having to log into the remote server.

Copy a single file:

scp username@server_ip:/path_to_the_remote_file username@server_ip:/path_to_destination_directory/

Copy a directory from one location on a remote server to a different location on the same server:

scp -r username@server_ip:/path_to_the_remote_directory username@server_ip:/path_to_destination_directory/

Copy all files from one directory to another directory on the same server:

scp -r username@server_ip:/path_to_source_directory/* username@server_ip:/path_to_the_destination_directory/

Copy files from one remote server to another remote server from a local machine

Currently I have to ssh into one server in order to use the rsync command to copy files to another server. With the scp command, I can move files between two remote servers directly from my local machine:

Copy a single file:

scp username@server1_ip:/path_to_the_remote_file username@server2_ip:/path_to_destination_directory/

Copy a directory from one location on one remote server to a different location on another remote server:

scp -r username@server1_ip:/path_to_the_remote_directory username@server2_ip:/path_to_destination_directory/

Copy all files from a directory on one remote server to a directory on another remote server:

scp -r username@server1_ip:/path_to_source_directory/* username@server2_ip:/path_to_the_destination_directory/

Conclusion

As you can see, once you understand how things work, it will be quite easy to move your files around. That’s what Linux is all about, just invest your time in understanding some basics, then it’s a breeze!

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Quick question: How much space do you have left on your drives? A little or a lot? Follow up question: Do you know how to find out? If you happen to use a GUI desktop (e.g., GNOME, KDE, Mate, Pantheon, etc.), the task is probably pretty simple. But what if you’re looking at a headless server, with no GUI? Do you need to install tools for the task? The answer is a resounding no. All the necessary bits are already in place to help you find out exactly how much space remains on your drives. In fact, you have two very easy-to-use options at the ready.

In this article, I’ll demonstrate these tools. I’ll be using Elementary OS, which also includes a GUI option, but we’re going to limit ourselves to the command line. The good news is these command-line tools are readily available for every Linux distribution. On my testing system, there are a number of attached drives (both internal and external). The commands used are agnostic to where a drive is plugged in; they only care that the drive is mounted and visible to the operating system.

With that said, let’s take a look at the tools.

df

The df command is the tool I first used to discover drive space on Linux, way back in the 1990s. It’s very simple in both usage and reporting. To this day, df is my go-to command for this task. This command has a few switches but, for basic reporting, you really only need one. That command is df -H. The -H switch is for human-readable format. The output of df -H will report how much space is used, available, percentage used, and the mount point of every disk attached to your system (Figure 1).

Figure 1: The output of df -H on my Elementary OS system.

What if your list of drives is exceedingly long and you just want to view the space used on a single drive? With df, that is possible. Let’s take a look at how much space has been used up on our primary drive, located at /dev/sda1. To do that, issue the command:

df -H /dev/sda1

The output will be limited to that one drive (Figure 2).

Figure 2: How much space is on one particular drive?

You can also limit the reported fields shown in the df output. Available fields are:

  • source — the file system source

  • size — total number of blocks

  • used — spaced used on a drive

  • avail — space available on a drive

  • pcent — percent of used space, divided by total size

  • target — mount point of a drive

Let’s display the output of all our drives, showing only the size, used, and avail (or availability) fields. The command for this would be:

df -H --output=size,used,avail

The output of this command is quite easy to read (Figure 3).

Figure 3: Specifying what output to display for our drives.

The only caveat here is that we don’t know the source of the output, so we’d want to include source like so:

df -H --output=source,size,used,avail

Now the output makes more sense (Figure 4).

Figure 4: We now know the source of our disk usage.

du

Our next command is du. As you might expect, that stands for disk usage. The du command is quite different from the df command, in that it reports on directories and not drives. Because of this, you’ll want to know the names of the directories to be checked. Let’s say I have a directory containing virtual machine files on my machine. That directory is /media/jack/HALEY/VIRTUALBOX. If I want to find out how much space is used by that particular directory, I’d issue the command:

du -h /media/jack/HALEY/VIRTUALBOX

The output of the above command will display the size of every file in the directory (Figure 5).

Figure 5: The output of the du command on a specific directory.

So far, this command isn’t all that helpful. What if we want to know the total usage of a particular directory? Fortunately, du can handle that task. On the same directory, the command would be:

du -sh /media/jack/HALEY/VIRTUALBOX/

Now we know how much total space the files are using up in that directory (Figure 6).

Figure 6: My virtual machine files are using 559GB of space.

You can also use this command to see how much space is being used on all child directories of a parent, like so:

du -h /media/jack/HALEY

The output of this command (Figure 7) is a good way to find out what subdirectories are hogging up space on a drive.

Figure 7: How much space are my subdirectories using?

The du command is also a great tool to use in order to see a list of directories that are using the most disk space on your system. The way to do this is by piping the output of du to two other commands: sort and head. The command to find out the top 10 directories eating space on a drive would look something like this:

du -a /media/jack | sort -n -r | head -n 10

The output would list out those directories, from largest to least offender (Figure 8).

Figure 8: Our top ten directories using up space on a drive.
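
If you’d rather see human-readable sizes in that top-ten list, GNU sort can compare them directly with its -h flag (a minor variation on the command above, assuming GNU coreutils):

du -h /media/jack | sort -rh | head -n 10

The ranking is the same; the sizes are simply reported in K, M, and G rather than raw block counts.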

Not as hard as you thought

Finding out how much space is being used on your Linux-attached drives is quite simple. As long as your drives are mounted to the Linux system, both df and du will do an outstanding job of reporting the necessary information. With df you can quickly see an overview of how much space is used on a disk and with du you can discover how much space is being used by specific directories. These two tools in combination should be considered must-know for every Linux administrator.

And, in case you missed it, I recently showed how to determine your memory usage on Linux. Together, these tips will go a long way toward helping you successfully manage your Linux servers.

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Although there are already a lot of good security features built into Linux-based systems, one very important potential vulnerability can exist when local access is granted: file permission-based issues resulting from a user not assigning the correct permissions to files and directories. So based upon the need for proper permissions, I will go over the ways to assign permissions and show you some examples where modification may be necessary.

Permission Groups

Each file and directory has three user based permission groups:

  • owner – The Owner permissions apply only to the owner of the file or directory; they will not impact the actions of other users.
  • group – The Group permissions apply only to the group that has been assigned to the file or directory; they will not affect the actions of other users.
  • all users – The All Users permissions apply to all other users on the system; this is the permission group that you want to watch the most.

Permission Types

Each file or directory has three basic permission types:

  • read – The Read permission refers to a user’s capability to read the contents of the file.
  • write – The Write permissions refer to a user’s capability to write or modify a file or directory.
  • execute – The Execute permission affects a user’s capability to execute a file or view the contents of a directory.

Viewing the Permissions

You can view the permissions by checking the file or directory permissions in your favorite GUI File Manager (which I will not cover here) or by reviewing the output of the “ls -l” command while in the terminal and while working in the directory which contains the file or folder.

The permission in the command line is displayed as: _rwxrwxrwx 1 owner:group

  1. User rights/Permissions
    1. The first character that I marked with an underscore is the special permission flag that can vary.
    2. The following set of three characters (rwx) is for the owner permissions.
    3. The second set of three characters (rwx) is for the Group permissions.
    4. The third set of three characters (rwx) is for the All Users permissions.
  2. Following that grouping is the integer/number that displays the number of hardlinks to the file.
  3. The last piece is the Owner and Group assignment formatted as Owner:Group.

Modifying the Permissions

When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.

Explicitly Defining Permissions

To explicitly define permissions you will need to reference the Permission Group and Permission Types.

The Permission Groups used are:

  • u – Owner
  • g – Group
  • o – Others
  • a – All users

The potential Assignment Operators are + (plus) and – (minus); these are used to tell the system whether to add or remove the specific permissions.

The Permission Types that are used are:

  • r – Read
  • w – Write
  • x – Execute

So for example, let’s say I have a file named file1 that currently has the permissions set to _rw_rw_rw, which means that the owner, group, and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.

To make this modification you would invoke the command: chmod o-rw file1 (the o targets the others group; using a-rw here would strip read and write from the owner and group as well).
To add the permissions back you would invoke the command: chmod o+rw file1

As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.
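
You can verify the result at any point with the ls -l command covered earlier. After granting the permissions back, the permissions column for file1 would read -rw-rw-rw-:

ls -l file1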

Using Binary References to Set permissions

Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three integers/numbers.

A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other users have no rights to the file.

The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.

  • r = 4
  • w = 2
  • x = 1

You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.

So to set the permissions on file1 to _rwxr_____, you would enter chmod 740 file1.
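
The arithmetic behind that string works out like this; each digit is simply the sum of the values for its group:

owner:  r + w + x = 4 + 2 + 1 = 7
group:  r = 4
others: (none) = 0

Read the three digits together and you get the 740 passed to chmod.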

Owners and Groups

I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory.

You use the chown command to change owner and group assignments. The syntax is simple:

chown owner:group filename

so to change the owner of file1 to user1 and the group to family you would enter chown user1:family file1.

Advanced Permissions

The special permissions flag can be marked with any of the following:

  • _ – no special permissions
  • d – directory
  • l – The file or directory is a symbolic link
  • s – This indicates the setuid/setgid permissions. It is not displayed in the special permission part of the permissions display, but is represented as an s in the execute portion of the owner or group permissions.
  • t – This indicates the sticky bit permissions. It is not displayed in the special permission part of the permissions display, but is represented as a t in the execute portion of the all users permissions.

Setuid/Setgid Special Permissions

The setuid/setgid permissions are used to tell the system to run an executable as the owner, with the owner’s permissions.

Be careful using setuid/setgid bits in permissions. If you incorrectly assign permissions to a file owned by root with the setuid/setgid bit set, then you can open your system to intrusion.

You can only assign the setuid/setgid bit by explicitly defining permissions. The character for the setuid/setgid bit is s.

So to set the setuid/setgid bit on file2.sh you would issue the command chmod g+s file2.sh.

Sticky Bit Special Permissions

The sticky bit can be very useful in shared environments because, when it has been assigned to the permissions on a directory, it ensures that only the file owner can rename or delete a given file within that directory.

You can only assign the sticky bit by explicitly defining permissions. The character for the sticky bit is t.

To set the sticky bit on a directory named dir1 you would issue the command chmod +t dir1.
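
A classic example of the sticky bit in action is the /tmp directory, which is world-writable yet protected so users cannot delete one another’s files. You can see it on nearly any Linux system with:

ls -ld /tmp

The permissions string will end in a t (drwxrwxrwt), indicating the sticky bit is set.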

When Permissions Are Important

Many users of Mac- or Windows-based computers never think about permissions, because those environments don’t focus so aggressively on user-based rights on files unless you are in a corporate environment. But now you are running a Linux-based system, where permission-based security is simplified and can be easily used to restrict access as you please.

So I will show you some documents and folders that you want to focus on and show you how the optimal permissions should be set.

  • home directories – The users’ home directories are important because you do not want other users to be able to view and modify the files in another user’s documents or desktop. To remedy this you will want the directory to have the drwx______ (700) permissions. So let’s say we want to enforce the correct permissions on the user user1’s home directory; that can be done by issuing the command chmod 700 /home/user1.
  • bootloader configuration files – If you decide to implement a password to boot specific operating systems, then you will want to remove read and write permissions from the configuration file for all users but root. To do so, you can change the permissions of the file to 700.
  • system and daemon configuration files – It is very important to restrict rights to system and daemon configuration files to keep users from editing the contents. It may not be advisable to restrict read permissions, but restricting write permissions is a must. In these cases it may be best to modify the rights to 644.
  • firewall scripts – It may not always be necessary to block all users from reading the firewall file, but it is advisable to restrict the users from writing to the file. In this case the firewall script is run by the root user automatically on boot, so all other users need no rights; you can assign the 700 permissions.

Other examples can be given, but this article is already very lengthy, so if you want to share other examples of needed restrictions please do so in the comments.

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

It goes without saying that every good Linux desktop environment offers the ability to search your file system for files and folders. If your default desktop doesn’t — because this is Linux — you can always install an app to make searching your directory hierarchy a breeze.

But what about the command line? If you happen to frequently work in the command line or you administer GUI-less Linux servers, where do you turn when you need to locate a file? Fortunately, Linux has exactly what you need to locate the files in question, built right into the system.

The command in question is find. To make the understanding of this command even more enticing, once you know it, you can start working it into your Bash scripts. That’s not only convenience, that’s power.

Let’s get up to speed with the find command so you can take control of locating files on your Linux servers and desktops, without the need of a GUI.

How to use the find command

When I first glimpsed Linux, back in 1997, I didn’t quite understand how the find command worked; therefore, it never seemed to function as I expected. It seemed simple; issue the command find FILENAME (where FILENAME is the name of the file) and the command was supposed to locate the file and report back. Little did I know there was more to the command than that. Much more.

If you issue the command man find, you’ll see the syntax of the find command is:

find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression]

Naturally, if you’re unfamiliar with how man works, you might be confused about or overwhelmed by that syntax. For ease of understanding, let’s simplify that. The most basic syntax of a basic find command would look like this:

find /path option filename

Now we’ll see it at work.

Find by name

Let’s break down that basic command to make it as clear as possible. The most simplistic structure of the find command should include a path for the file, an option, and the filename itself. You may be thinking, “If I know the path to the file, I’d already know where to find it!” Well, the path for the file could be the root of your drive; so / would be a legitimate path. Entering that as your path would take find longer to process — because it has to start from scratch — but if you have no idea where the file is, you can start from there. In the name of efficiency, it is always best to have at least an idea where to start searching.

The next bit of the command is the option. As with most Linux commands, you have a number of available options. However, we are starting from the beginning, so let’s make it easy. Because we are attempting to find a file by name, we’ll use one of two options:

  • -name – case sensitive

  • -iname – case insensitive

Remember, Linux is very particular about case, so if you’re looking for a file named Linux.odt, the following command will return no results.

find / -name linux.odt

If, however, you were to alter the command by using the -iname option, the find command would locate your file, regardless of case. So the new command looks like:

find / -iname linux.odt

Find by type

What if you’re not so concerned with locating a file by name but would rather locate all files of a certain type? Some of the more common file descriptors are:

  • f – regular file

  • d – directory

  • l – symbolic link

  • c – character devices

  • b – block devices

Now, suppose you want to locate all character devices (a file that refers to a device) on your system. With the help of the -type option, we can do that like so:

find / -type c

The above command would result in quite a lot of output (much of it indicating permission denied), but would include output similar to:

/dev/hidraw6
/dev/hidraw5
/dev/vboxnetctl
/dev/vboxdrvu
/dev/vboxdrv
/dev/dmmidi2
/dev/midi2
/dev/kvm

Voilà! Character devices.

We can use the same option to help us look for configuration files. Say, for instance, you want to locate all regular files that end in the .conf extension. This command would look something like:

find / -type f -name "*.conf"

The above command would traverse the entire directory structure to locate all regular files ending in .conf. If you know most of your configuration files are housed in /etc, you could specify that like so:

find /etc -type f -name "*.conf"

The above command would list all of your .conf files from /etc (Figure 1).

Figure 1: Locating all of your configuration files in /etc.

Outputting results to a file

One really handy trick is to output the results of the search into a file. When you know the output might be extensive, or if you want to comb through the results later, this can be incredibly helpful. For this, we’ll use the same example as above and pipe the results into a file called conf_search. This new command would look like:

find /etc -type f -name "*.conf" > conf_search

You will now have a file (conf_search) that contains all of the results from the find command issued.
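
You can then page through those results whenever you like, using the less command in the same way you would view any file:

less conf_search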

Finding files by size

Now we get to a moment where the find command becomes incredibly helpful. I’ve had instances where desktops or servers have found their drives mysteriously filled. To quickly make space (or help locate the problem), you can use the find command to locate files of a certain size. Say, for instance, you want to go large and locate files that are over 1000MB. The find command can be issued, with the help of the -size option, like so:

find / -size +1000M

You might be surprised at how many files turn up. With the output from the command, you can comb through the directory structure and free up space or troubleshoot to find out what is mysteriously filling up your drive.

You can search with the following size descriptions:

  • c – bytes

  • k – Kilobytes

  • M – Megabytes

  • G – Gigabytes

  • b – 512-byte blocks
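
Putting those descriptors to work, you could hunt for regular files over 100 megabytes in a specific directory tree like so (the path /var/log is just an example):

find /var/log -type f -size +100M

Combining -type, -name, and -size in a single command is where find really starts to shine.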

Keep learning

We’ve only scratched the surface of the find command, but you now have a fundamental understanding of how to locate files on your Linux systems. Make sure to issue the command man find to get a deeper, more complete, knowledge of how to make this powerful tool work for you.

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Picture this: You’ve launched an application (be it from your favorite desktop menu or from the command line) and you start using that launched app, only to have it lock up on you, stop performing, or unexpectedly die. You try to run the app again, but it turns out the original never truly shut down completely.

What do you do? You kill the process. But how? Believe it or not, your best bet most often lies within the command line. Thankfully, Linux has every tool necessary to empower you, the user, to kill an errant process. However, before you immediately launch that command to kill the process, you first have to know what the process is. How do you take care of this layered task? It’s actually quite simple…once you know the tools at your disposal.

Let me introduce you to said tools.

The steps I’m going to outline will work on almost every Linux distribution, whether it is a desktop or a server. I will be dealing strictly with the command line, so open up your terminal and prepare to type.

Locating the process

The first step in killing the unresponsive process is locating it. There are two commands I use to locate a process: top and ps. Top is a tool every administrator should get to know. With top, you get a full listing of currently running processes. From the command line, issue top to see a list of your running processes (Figure 1).

Figure 1: The top command gives you plenty of information.

From this list you will see some rather important information. Say, for example, Chrome has become unresponsive. According to our top display, we can discern there are four instances of chrome running with Process IDs (PID) 3827, 3919, 10764, and 11679. This information will be important to have with one particular method of killing the process.

Although top is incredibly handy, it’s not always the most efficient means of getting the information you need. Let’s say you know the Chrome process is what you need to kill, and you don’t want to have to glance through the real-time information offered by top. For that, you can make use of the ps command and filter the output through grep. The ps command reports a snapshot of the current processes and grep prints lines matching a pattern. The reason why we filter ps through grep is simple: If you issue the ps command by itself, you will get a snapshot listing of all current processes. We only want the listing associated with Chrome. So this command would look like:

ps aux | grep chrome

The aux options are as follows:

  • a = show processes for all users

  • u = display the process’s user/owner

  • x = also show processes not attached to a terminal

The x option is important when you’re hunting for information regarding a graphical application.

When you issue the command above, you’ll be given more information than you need (Figure 2) for the killing of a process, but it is sometimes more efficient than using top.

Figure 2: Locating the necessary information with the ps command.

Killing the process

Now we come to the task of killing the process. We have two pieces of information that will help us kill the errant process:

  • Process name
  • Process ID

Which you use will determine the command used for termination. There are two commands used to kill a process:

  • kill – Kill a process by ID
  • killall – Kill a process by name

There are also different signals that can be sent to both kill commands. What signal you send will be determined by what results you want from the kill command. For instance, you can send the HUP (hang up) signal with the kill command, which will effectively restart the process. This is always a wise choice when you need the process to immediately restart (such as in the case of a daemon). You can get a list of all the signals that can be sent to the kill command by issuing kill -l. You’ll find quite a large number of signals (Figure 3).

Figure 3: The available kill signals.

The most common kill signals are:

Signal Name     Signal Value     Effect
SIGHUP          1                Hangup
SIGINT          2                Interrupt from keyboard
SIGKILL         9                Kill signal
SIGTERM         15               Termination signal
SIGSTOP         17, 19, 23       Stop the process

What’s nice about this is that you can use the Signal Value in place of the Signal Name. So you don’t have to memorize all of the names of the various signals.
So, let’s now use the kill command to kill our instance of chrome. The structure for this command would be:

kill SIGNAL PID

Where SIGNAL is the signal to be sent and PID is the Process ID to be killed. We already know, from our ps command, that the IDs we want to kill are 3827, 3919, 10764, and 11679. So to send the kill signal, we’d issue the commands:

kill -9 3827

kill -9 3919

kill -9 10764

kill -9 11679

Once we’ve issued the above commands, all of the chrome processes will have been successfully killed.

Let’s take the easy route! If we already know the process we want to kill is named chrome, we can make use of the killall command and send the same signal to the process like so:

killall -9 chrome

The only caveat to the above command is that it may not catch all of the running chrome processes. If, after running the above command, you issue the ps aux | grep chrome command and see remaining processes running, your best bet is to go back to the kill command and send signal 9 to terminate the process by PID.

Ending processes made easy

As you can see, killing errant processes isn’t nearly as challenging as you might have thought. When I wind up with a stubborn process, I tend to start off with the killall command as it is the most efficient route to termination. However, when you wind up with a really feisty process, the kill command is the way to go.

Our recently published Open Source Jobs Report examined the demand for open source talent and trends among open source professionals. What did we find?

Open Source Career Opportunities are Strong

The good news is that hiring is rebounding in the wake of the pandemic, as organizations look to continue their investments in digital transformation. This is evidenced by 50% of employers surveyed who stated they are increasing hires this year. There are significant challenges though, with 92% of managers having difficulty finding enough talent and struggling to hold onto existing talent in the face of fierce competition. Other key findings from this year’s report included:

  • Cloud is on the rise. Cloud and container technology skills are most in-demand by hiring managers, surpassing Linux for the first time, with 46% of hiring managers seeking cloud talent.
  • DevOps has become the standard method for developing software. Virtually all open source professionals (88%) report using DevOps practices in their work, a 50% increase from three years ago.
  • Demand for certified talent is spiking. Managers are prioritizing hires of certified talent (88%).
  • Training is increasingly helping close skills gaps. 92% of managers report increasing requests for training. Employers also report that they prioritize training investments to close skills gaps, with 58% using this tactic.
  • Discrimination is a growing concern in the community. Open source professionals having been discriminated against or made to feel unwelcome in the community increased to 18% in 2021 — a 125% increase over the past three years.

Enabling Training and Certification

This year, ‬vendor-neutral training and certification grew in importance as demand for professionals with critical skills in open cloud technologies and DevOps increased‭.‬ Over 2 million individuals have enrolled in free Linux Foundation training courses, providing them a great way to explore different open source technologies and decide which is the best fit for them; this includes over a million students who have enrolled in our Introduction to Linux course on the edX platform. To date, over 50,000 individuals have been certified for their technical competence through Linux Foundation programs.

This year, our Training & Certification team launched over 20 new offerings. We now host over 70 eLearning courses, deliver over 20 instructor-led courses, and offer more than a dozen certification exams that enable certified professionals to demonstrate their skills, with more being released regularly. 

This year saw the addition of exam simulators to our Kubernetes certification exams, enabling exam registrants to familiarize themselves with the exam environment before sitting for their exam. In late 2021, we will launch a new Kubernetes and Cloud Native Associate certification exam, which will serve as an entry-level certification for new cloud professionals.

In 2021, The Linux Foundation directly awarded 500 scholarships for free training and certification to individuals worldwide. Hundreds more were awarded via partnerships with nonprofits, including Blacks in Technology, TransTech Social Enterprises, and Women Who Code.

New training and certification offerings launched in 2021 include:

  • Building a RISC-V CPU Core
  • Certified Kubernetes and Cloud Native Associate (KCNA)
  • Certified TARS Application Developer (CTAD)
  • FinOps for Engineering
  • Generating a Software Bill of Materials
  • GitOps: Continuous Delivery on Kubernetes with Flux
  • Hyperledger Besu Essentials: Creating a Private Blockchain Network
  • Kubernetes and Cloud Native Essentials
  • Kubernetes Security Essentials
  • Kubernetes Security Fundamentals
  • Implementing DevSecOps
  • Introduction to Cloud Foundry
  • Introduction to FDC3 Standard
  • Introduction to GitOps
  • Introduction to Kubernetes on Edge with K3s
  • Introduction to Magma: Cloud Native Wireless Networking
  • Introduction to Node.js
  • Introduction to RISC-V
  • Introduction to WebAssembly
  • Open Source Management and Strategy
  • RISC-V Toolchain and Compiler Optimization Techniques
  • WebAssembly Actors: From Cloud to Edge

Explore the full catalog of courses at training.linuxfoundation.org/full-catalog.