This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Many years ago, when I first began with Linux, installing applications and keeping a system up to date was not an easy feat. In fact, if you wanted to tackle either task you were bound for the command line. For some new users this left their machines outdated or without applications they needed. Of course, at the time, most everyone trying their hand at Linux knew they were getting into something that would require some work. That was simply the way it was. Fortunately times and Linux have changed. Now Linux is exponentially more user friendly – so much is automatic and point and click – that today’s Linux hardly resembles yesterday’s Linux.

But even though Linux has evolved into the user-friendly operating system it is, there are still some systems that are fundamentally different from their Windows counterparts. So it is always best to understand those systems in order to be able to properly use them. Within the confines of this article you will learn how to keep your Linux system up to date. In the process you might also learn how to install an application or two.

There is one thing to understand about updating Linux: Not every distribution handles this process in the same fashion. In fact, some distributions are distinctly different, right down to the package file types they use.

  • Ubuntu and Debian use .deb
  • Fedora, SuSE, and Mandriva use .rpm
  • Slackware uses .tgz archives which contain pre-built binaries
  • And of course there is also installing from source or pre-compiled .bin or .package files.
As you can see there are a number of possible systems (and the above list is not even close to being all-inclusive). So to make the task of covering this topic less epic, I will cover the Ubuntu and Fedora systems. I will touch on both the GUI as well as the command line tools for handling system updates.

Ubuntu Linux

Ubuntu Linux has become one of the most popular of all the Linux distributions. And through the process of updating a system, you should be able to tell exactly why this is the case. Ubuntu is very user friendly. Ubuntu uses two different tools for system update:
  • apt-get: Command line tool.
  • Update Manager: GUI tool.
The Update Manager is a nearly 100% automatic tool. With this tool you will not have to routinely check to see if there are updates available. Instead you will know updates are available because the Update Manager will open on your desktop (see Figure 1) as soon as updates arrive, on a schedule that depends upon their type:
  • Security updates: Daily
  • Non-security updates: Weekly

If you want to manually check for updates, you can do this by clicking the Administration sub-menu of the System menu and then selecting the Update Manager entry. When the Update Manager opens click the Check button to see if there are updates available.

Figure 1 shows a listing of updates for an Ubuntu 9.10 installation. As you can see there are both Important Security Updates as well as Recommended Updates. If you want to get information about a particular update you can select the update and then click on the Description of update dropdown.
In order to update the packages follow these steps:
  1. Check the updates you want to install. By default all updates are selected.
  2. Click the Install Updates button.
  3. Enter your user (sudo) password.
  4. Click OK.

The updates will proceed and you can continue on with your work. Some updates may require you either to log out of your desktop and log back in, or to reboot the machine. There is a new tool in development (Ksplice) that allows even a kernel update to be applied without a reboot.
Once all of the updates are complete the Update Manager main window will return, reporting that Your system is up to date.

Updating via command line

Now let’s take a look at the command line tools for updating your system. The Ubuntu package management system is called apt. Apt is a very powerful tool that can completely manage your system’s packages via the command line. Using the command line tool has one drawback – in order to check to see if you have updates, you have to run it manually. Let’s take a look at how to update your system with the help of apt. Follow these steps:

  1. Open up a terminal window.
  2. Issue the command sudo apt-get update to refresh the package lists, followed by sudo apt-get upgrade.
  3. Enter your user’s password.
  4. Look over the list of available updates (see Figure 2) and decide if you want to go through with the entire upgrade.
  5. To accept all updates press the ‘y’ key (no quotes) and hit Enter.
  6. Watch as the update happens.
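
Put together, the terminal side of this boils down to two commands (a minimal sketch – the exact list of packages you see will differ from system to system):

sudo apt-get update     # refresh the package lists first
sudo apt-get upgrade    # review the proposed upgrades, then confirm with 'y'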

That’s it. Your system is now up to date. Let’s take a look at how the same process happens on Fedora (Fedora 12 to be exact).

Fedora Linux

Fedora is a direct descendant of Red Hat Linux, so it is the beneficiary of the Red Hat Package Management system (rpm). Like Ubuntu, Fedora can be upgraded by:
  • yum: Command line tool.
  • GNOME (or KDE) PackageKit: GUI tool.

Depending upon your desktop, you will either use the GNOME or the KDE front-end for PackageKit. In order to open up this tool you simply go to the Administration sub-menu of the System menu and select the Software Update entry. When the tool opens (see Figure 3) you will see the list of updates. To get information about a particular update all you need to do is to select a specific package and the information will be displayed in the bottom pane.

To go ahead with the update click the Install Updates button. As the process happens a progress bar will indicate where GNOME (or KDE) PackageKit is in the steps. The steps are:

  1. Resolving dependencies.
  2. Downloading packages.
  3. Testing changes.
  4. Installing updates.

When the process is complete, GNOME (or KDE) PackageKit will report that your system is up to date. Click the OK button when prompted.

Now let’s take a look at upgrading Fedora via the command line. As stated earlier, this is done with the help of the yum command. In order to take care of this, follow these steps:

Updating with the help of yum

  1. Open up a terminal window (do this by going to the System Tools sub-menu of the Applications menu and selecting Terminal).
  2. Enter the su command to change to the super user.
  3. Type your super user password and hit Enter.
  4. Issue the command yum update and yum will check to see what packages are available for update.
  5. Look through the listing of updates (see Figure 4).
  6. If you want to go through with the update enter ‘y’ (no quotes) and hit Enter.
  7. Sit back and watch the updates happen.
  8. Exit out of the root user command prompt by typing “exit” (no quotes) and hitting Enter.
  9. Close the terminal when complete.
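
Condensed into a terminal session, those steps look roughly like this (a minimal sketch):

su            # become the super user; enter the root password when prompted
yum update    # review the proposed updates, then confirm with 'y'
exit          # drop back to your normal user when finished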

Your Fedora system is now up to date.

Final Thoughts

Granted only two distributions were touched on here, but this should illustrate how easily a Linux installation is updated. Although the tools might not be universal, the concepts are. Whether you are using Ubuntu, OpenSuSE, Slackware, Fedora, Mandriva, or anything in-between, the above illustrations should help you through updating just about any Linux distribution. And hopefully this tutorial helps to show you just how user-friendly the Linux operating system has become.

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Joe “Zonker” Brockmeier from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

The first step is often the hardest, but don’t let that stop you. If you’ve ever wanted to learn how to write a shell script but didn’t know where to start, this is your lucky day.

If this is your first time writing a script, don’t worry — shell scripting is not that complicated. That is, you can do some complicated things with shell scripts, but you can get there over time. If you know how to run commands at the command line, you can learn to write simple scripts in just 10 minutes. All you need is a text editor and an idea of what you want to do. Start small and use scripts to automate small tasks. Over time you can build on what you know and wind up doing more and more with scripts.

Starting Off

Each script starts with a “shebang” and the path to the shell that you want the script to use, like so:

#!/bin/bash

The “#!” combo is called a shebang by most Unix geeks. It is used by the system to decide which interpreter should run the rest of the script, and it is ignored (as a comment) by the interpreter that actually runs the script. Confused? Scripts can be written for all kinds of interpreters — bash, tcsh, zsh, or other shells, or for Perl, Python, and so on. You could even omit that line if you wanted to run the script by sourcing it at the shell, but let’s save ourselves some trouble and add it to allow scripts to be run non-interactively.
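
To make that concrete, here is a tiny example (the file name hello.sh is purely illustrative). With the shebang in place you can run the script directly; you can also always name the interpreter yourself:

chmod +x hello.sh   # make the script executable (needed once)
./hello.sh          # the system reads the shebang and hands the script to /bin/bash
bash hello.sh       # explicit interpreter; here the shebang line is just a comment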

What’s next? You might want to include a comment or two about what the script is for. Preface comments with the hash (#) character:

#!/bin/bash
# A simple script

Let’s say you want to run an rsync command from the script, rather than typing it each time. Just add the rsync command to the script that you want to use:

#!/bin/bash
# rsync script
rsync -avh --exclude="*.bak" /home/user/Documents/ /media/diskid/user_backup/Documents/

Save your file, and then make sure that it’s set executable. You can do this using the chmod utility, which changes a file’s mode. To set it so that a script is executable by you and not the rest of the users on a system, use “chmod 700 scriptname” — this will let you read, write, and execute (run) the script — but only your user. To see the results, run ls -lh scriptname and you’ll see something like this:

-rwx------ 1 jzb jzb   21 2010-02-01 03:08 echo

The first column of rights, rwx, shows that the owner of the file (jzb) has read, write, and execute permissions. The other columns with a dash show that other users have no rights for that file at all.

Variables

The above script is useful, but it has hard-coded paths. That might not be a problem, but if you want to write longer scripts that reference paths often, you probably want to utilize variables. Here’s a quick sample:

#!/bin/bash
# rsync using variables

SOURCEDIR=/home/user/Documents/
DESTDIR=/media/diskid/user_backup/Documents/

rsync -avh --exclude="*.bak" $SOURCEDIR $DESTDIR

There’s not a lot of benefit if you only reference the directories once, but if they’re used multiple times, it’s much easier to change them in one location than changing them throughout a script.

Taking Input

Non-interactive scripts are useful, but what if you need to give the script new information each time it’s run? For instance, what if you want to write a script to modify a file? One thing you can do is take an argument from the command line. So, for instance, when you run “script foo” the script will take the name of the first argument (foo):

#!/bin/bash

echo $1

Here bash will read the command line and echo (print) the first argument — that is, the first string after the command itself.
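
Assuming you saved that script as args.sh and made it executable (the name is just for illustration), a quick run looks like this:

$ ./args.sh foo
foo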

You can also use read to accept user input. Let’s say you want to prompt a user for input:

#!/bin/bash

echo -e "Please enter your name: "
read name
echo "Nice to meet you $name"

That script will wait for the user to type in their name (or any other input, for that matter) and use it as the variable $name. Pretty simple, yeah? Let’s put all this together into a script that might be useful. Let’s say you want to have a script that will back up a directory you specify on the command line to a remote host:

#!/bin/bash

echo -e "What directory would you like to back up?" 
read directory

DESTDIR=user@remotehost:$directory/   # placeholder: substitute your own login and host

rsync --progress -avze ssh --exclude="*.iso" $directory $DESTDIR

That script will read in the input from the command line and substitute it as the destination directory at the target system, as well as the local directory that will be synced. It might look a bit complex as a final script, but each of the bits that you need to know to put it together are pretty simple. A little trial and error and you’ll be creating useful scripts of your own.

Of course, this is just scraping the surface of bash scripting. If you’re interested in learning more, be sure to check out the Bash Guide for Beginners.

Ready to continue your Linux journey? Check out our free intro to Linux course!

LiFT scholarships are open

It’s that time of year – Linux Foundation Training (LiFT) Scholarships are here! Since 2011, The Linux Foundation has awarded over 1,100 scholarships for millions of dollars in training and certification to deserving individuals around the world who would otherwise be unable to afford it. This is part of our mission to grow the open source community by lowering the barrier to entry and making quality training options accessible to those who want them.

Applications are being accepted through April 30 in 12 different categories:

  • Open Source Newbies
  • Teens-in-Training
  • Women in Open Source
  • Software Developer Do-Gooder
  • SysAdmin Super Star
  • Blockchain Blockbuster
  • Cloud Captain
  • Linux Kernel Guru
  • Networking Notable
  • Web Development Wiz
  • Hardware Hero – NEW
  • Cybersecurity Champion – NEW

Whether you are just starting in your open source career, or you are a veteran developer or sysadmin who is looking to gain new skills, if you feel you can benefit from training and/or certification but cannot afford it, you should apply.

Recipients will receive a Linux Foundation eLearning training course and certification exam. All certification exams, and most training courses, are offered remotely, meaning they can be completed from anywhere.

Winners will be announced this summer.

Meet past winners

Apply today!

MLH production engineering scholarships

For the second summer, Major League Hacking (MLH) is running the Production Engineering Track of the MLH Fellowship, powered by Meta. This 12-week educational program is 100% remote and uses industry-leading curriculum from Linux Foundation Training & Certification. The program is hands-on, project-based, and teaches students how to become Production Engineers. The goal of the program is for all participants to land a job or internship in the Site Reliability Engineering space, and it is open to 100 active college students who meet our admissions criteria.

This Summer’s program will start on May 31, 2022 and will end on August 19, 2022.

Applications are now open and will close on May 23, 2022!

Apply Now!

What is Production Engineering?

Production Engineering, also known as Site Reliability Engineering and DevOps, is one of the most in-demand skill sets that leading technology companies are hiring for. However, it is not widely available as a class offering in university settings.

At Meta, Production Engineers (PEs) are a hybrid between software and systems engineers and are core to engineering efforts that keep Meta platforms running and scaling. PEs work within Meta’s product and infrastructure teams to make sure products and services are reliable and scalable; this means, writing code and debugging hard problems in production systems across Meta services – like Instagram, WhatsApp, and Oculus – and backend services like Storage, Cache, and Network.

What is the Production Engineering Track of the MLH Fellowship?

Launched in the summer of 2020, the MLH Fellowship first focused on Open Source Software projects, pairing early career software engineers with projects and engineers from widely-used open source codebases (like AWS, GitHub, and Solana Labs). During the program, Fellows learned important concepts and software practices while contributing production-level code to their projects and showcasing those contributions in their portfolio. Through the Fellowship, 700 global alumni have learned Open Source skills and tools and increased their professional networks in the process.

The Production Engineering Track takes this proven fellowship model and expands on it. As part of the Production Engineering Track, fellows are put in groups of 10 (“Pods”), matched to dedicated mentors from Meta Engineering while they work through projects and curriculum, and receive guidance from Meta’s Talent Acquisition team, too. Successful program graduates will be invited to apply to full-time Meta internships.

What will admitted fellows learn in the Production Engineering Track?

Program participants will gain practical skills from educational content – adapted by the MLH Curriculum Team – licensed from the Linux Foundation’s “Essentials of System Administration” course. The program covers how to administer, configure and upgrade Linux systems, along with the tools and concepts necessary to build and manage a production Linux infrastructure. The complete list of topics covered in the program includes:

  • Linux Fundamentals
  • Scripting
  • Databases
  • Services
  • Testing
  • Containers
  • CI/CD
  • Monitoring
  • Networking
  • Troubleshooting
  • Interview skills

By pairing this industry-leading curriculum with hands-on, project-based learning – and engineering mentors from Meta – fellows in the Production Engineering Track greatly build on their programming knowledge. Fellows will learn a broader array of technology skills, opening the door to new career options in SRE.

What are the important dates I should know about?

The program will be available to roughly 100 aspiring software engineers and will start on May 31, 2022 and end on August 19, 2022.

Applications are now open and will close on May 23, 2022!

Will I get paid as part of the program?

Each successful participant will earn an educational stipend adjusted for Purchasing Power Parity for the country they’re located in.

Who is eligible?

Eligible students are:

  • Rising sophomores or juniors enrolled in a 4-year degree-granting program
  • United States, Mexico, or Canada-based
  • Able to code in at least one language (preferably Python)
  • Able to dedicate at least 30 hours/week for the 12 weeks of the program

MLH invites and encourages people to apply who identify as women or non-binary. MLH also invites and encourages people to apply who identify as Black/African American or LatinX. In partnership with Meta, MLH is committed to building a more diverse and inclusive tech industry and providing learning opportunities to under-represented technologists.

Apply Now!

This article was originally posted at Major League Hacking.

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

At some point in your career as a Linux administrator, you are going to have to view log files. After all, they are there for one very important reason…to help you troubleshoot an issue. In fact, every seasoned administrator will immediately tell you that the first thing to be done, when a problem arises, is to view the logs.

And there are plenty of logs to be found: logs for the system, logs for the kernel, for package managers, for Xorg, for the boot process, for Apache, for MySQL… For nearly anything you can think of, there is a log file.

Most log files can be found in one convenient location: /var/log. These are all system and service logs, those which you will lean on heavily when there is an issue with your operating system or one of the major services. For desktop app-specific issues, log files will be written to different locations (e.g., Thunderbird writes crash reports to ‘~/.thunderbird/Crash Reports’). Where a desktop application will write logs will depend upon the developer and if the app allows for custom log configuration.

We are going to focus on system logs, as that is where the heart of Linux troubleshooting lies. And the key issue here is, how do you view those log files?

Fortunately there are numerous ways in which you can view your system logs, all quite simply executed from the command line.

/var/log

This is such a crucial folder on your Linux systems. Open up a terminal window and issue the command cd /var/log. Now issue the command ls and you will see the logs housed within this directory (Figure 1).

Figure 1: A listing of log files found in /var/log/.

Now, let’s take a peek into one of those logs.

Viewing logs with less

One of the most important logs contained within /var/log is syslog. This particular log file logs everything except auth-related messages. Say you want to view the contents of that particular log file. To do that, you could quickly issue the command less /var/log/syslog. This command will open the syslog log file to the top. You can then use the arrow keys to scroll down one line at a time, the spacebar to scroll down one page at a time, or the mouse wheel to easily scroll through the file.

The one problem with this method is that syslog can grow fairly large; and, considering what you’re looking for will most likely be at or near the bottom, you might not want to spend the time scrolling line or page at a time to reach that end. With syslog open in the less command, you can also hit the [Shift]+[g] combination to immediately go to the end of the log file. The end will be denoted by (END). You can then scroll up with the arrow keys or the scroll wheel to find exactly what you want.

This, of course, isn’t terribly efficient.
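
One small time-saver: less accepts commands on its command line, so you can have it jump straight to the end of the file as it opens by passing it the G command:

less +G /var/log/syslog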

Viewing logs with dmesg

The dmesg command prints the kernel ring buffer. By default, the command will display all messages from the kernel ring buffer. From the terminal window, issue the command dmesg and the entire kernel ring buffer will print out (Figure 2).

Figure 2: A USB external drive displaying an issue that may need to be explored.

Fortunately, there is a built-in control mechanism that allows you to print out only certain facilities (such as daemon).

Say you want to view log entries for the user facility. To do this, issue the command dmesg --facility=user. If anything has been logged to that facility, it will print out.

Unlike the less command, issuing dmesg will display the full contents of the log and send you to the end of the file. You can always use your scroll wheel to browse through the buffer of your terminal window (if applicable). Instead, you’ll want to pipe the output of dmesg to the less command like so:

dmesg | less

The above command will print out the contents of dmesg and allow you to scroll through the output just as you did viewing a standard log with the less command.

Viewing logs with tail

The tail command is probably one of the single most handy tools you have at your disposal for the viewing of log files. What tail does is output the last part of files. So, if you issue the command tail /var/log/syslog, it will print out only the last few lines of the syslog file.

But wait, the fun doesn’t end there. The tail command has a very important trick up its sleeve, by way of the -f option. When you issue the command tail -f /var/log/syslog, tail will continue watching the log file and print out the next line written to the file. This means you can follow what is written to syslog, as it happens, within your terminal window (Figure 3).
Figure 3: Following /var/log/syslog using the tail command.

Using tail in this manner is invaluable for troubleshooting issues.

To escape the tail command (when following a file), hit the [Ctrl]+[c] combination.

You can also instruct tail to only follow a specific amount of lines. Say you only want to view the last five lines written to syslog; for that you could issue the command:

tail -f -n 5 /var/log/syslog

The above command would follow input to syslog and only print out the most recent five lines. As soon as a new line is written to syslog, it would remove the oldest from the top. This is a great way to make the process of following a log file even easier. I strongly recommend not going below four or five lines, as you’ll wind up getting input cut off and won’t get the full details of the entry.
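
When you already know roughly what you’re looking for, tail also combines well with grep – a common shell pattern rather than anything distribution-specific. For example, to follow syslog but only see lines mentioning “error”:

tail -f /var/log/syslog | grep -i error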

There are other tools

You’ll find plenty of other commands (and even a few decent GUI tools) to enable the viewing of log files. Look to more, grep, head, cat, multitail, and System Log Viewer to aid you in your quest to troubleshooting systems via log files.

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

This is a classic article written by Carla Schroder from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

grub command shell

Once upon a time we had legacy GRUB, the GRand Unified Bootloader, version 0.97. Legacy GRUB had many virtues, but it became old and its developers did yearn for more functionality, and thus did GRUB 2 come into the world.

GRUB 2 is a major rewrite with several significant differences. It boots removable media, and can be configured with an option to enter your system BIOS. It’s more complicated to configure with all kinds of scripts to wade through, and instead of having a nice fairly simple /boot/grub/menu.lst file with all configurations in one place, the default is /boot/grub/grub.cfg. Which you don’t edit directly, oh no, for this is not for mere humans to touch, but only other scripts. We lowly humans may edit /etc/default/grub, which controls mainly the appearance of the GRUB menu. We may also edit the scripts in /etc/grub.d/. These are the scripts that boot your operating systems, control external applications such as memtest and os_prober, and handle theming. /boot/grub/grub.cfg is built from /etc/default/grub and /etc/grub.d/* when you run the update-grub command, which you must run every time you make changes.

The good news is that the update-grub script is reliable for finding kernels, boot files, and adding all operating systems to your GRUB boot menu, so you don’t have to do it manually.

We’re going to learn how to fix two of the more common failures. When you boot up your system and it stops at the grub> prompt, that is the full GRUB 2 command shell. That means GRUB 2 started normally and loaded the normal.mod module (and other modules which are located in /boot/grub/[arch]/), but it didn’t find your grub.cfg file. If you see grub rescue> that means it couldn’t find normal.mod, so it probably couldn’t find any of your boot files.

How does this happen? The kernel might have changed drive assignments or you moved your hard drives, you changed some partitions, or installed a new operating system and moved things around. In these scenarios your boot files are still there, but GRUB can’t find them. So you can look for your boot files at the GRUB prompt, set their locations, and then boot your system and fix your GRUB configuration.

GRUB 2 Command Shell

The GRUB 2 command shell is just as powerful as the shell in legacy GRUB. You can use it to discover boot images, kernels, and root filesystems. In fact, it gives you complete access to all filesystems on the local machine regardless of permissions or other protections. Which some might consider a security hole, but you know the old Unix dictum: whoever has physical access to the machine owns it.

When you’re at the grub> prompt, you have a lot of functionality similar to any command shell such as history and tab-completion. The grub rescue> mode is more limited, with no history and no tab-completion.

If you are practicing on a functioning system, press C when your GRUB boot menu appears to open the GRUB command shell. You can stop the bootup countdown by scrolling up and down your menu entries with the arrow keys. It is safe to experiment at the GRUB command line because nothing you do there is permanent. If you are already staring at the grub> or grub rescue> prompt then you’re ready to rock.

The next few commands work with both grub> and grub rescue>. The first command you should run invokes the pager, for paging long command outputs:

grub> set pager=1

There must be no spaces on either side of the equals sign. Now let’s do a little exploring. Type ls to list all partitions that GRUB sees:

grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1)

What’s all this msdos stuff? That means this system has the old-style MS-DOS partition table, rather than the shiny new Globally Unique Identifiers partition table (GPT). (See “Using the New GUID Partition Table in Linux (Goodbye Ancient MBR)”.) If you’re running GPT it will say (hd0,gpt1). Now let’s snoop. Use the ls command to see what files are on your system:

grub> ls (hd0,1)/
lost+found/ bin/ boot/ cdrom/ dev/ etc/ home/  lib/
lib64/ media/ mnt/ opt/ proc/ root/ run/ sbin/ 
srv/ sys/ tmp/ usr/ var/ vmlinuz vmlinuz.old 
initrd.img initrd.img.old

Hurrah, we have found the root filesystem. You can omit the msdos and gpt labels. If you leave off the slash it will print information about the partition. You can read any file on the system with the cat command:

grub> cat (hd0,1)/etc/issue
Ubuntu 14.04 LTS \n \l

Reading /etc/issue could be useful on a multi-boot system for identifying your various Linuxes.

Booting From grub>

This is how to set the boot files and boot the system from the grub> prompt. We know from running the ls command that there is a Linux root filesystem on (hd0,1), and you can keep searching until you verify where /boot/grub is. Then run these commands, using your own root partition, kernel, and initrd image:

grub> set root=(hd0,1)
grub> linux /boot/vmlinuz-3.13.0-29-generic root=/dev/sda1
grub> initrd /boot/initrd.img-3.13.0-29-generic
grub> boot

The first line sets the partition that the root filesystem is on. The second line tells GRUB the location of the kernel you want to use. Start typing /boot/vmli, and then use tab-completion to fill in the rest. Type root=/dev/sdX to set the location of the root filesystem. Yes, this seems redundant, but if you leave this out you’ll get a kernel panic. How do you know the correct partition? hd0,1 = /dev/sda1. hd1,1 = /dev/sdb1. hd3,2 = /dev/sdd2. I think you can extrapolate the rest.

The third line sets the initrd file, which must be the same version number as the kernel.

The fourth line boots your system.
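
If you’d rather not work out the hdX,Y numbering by hand, GRUB 2 also includes a search command that can locate a file and set root for you (assuming the search module is available, as it normally is at the grub> prompt):

grub> search --file --set=root /boot/vmlinuz-3.13.0-29-generic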

On some Linux systems the current kernels and initrds are symlinked into the top level of the root filesystem:

$ ls -l /
vmlinuz -> boot/vmlinuz-3.13.0-29-generic
initrd.img -> boot/initrd.img-3.13.0-29-generic

So you could boot from grub> like this:

grub> set root=(hd0,1)
grub> linux /vmlinuz root=/dev/sda1
grub> initrd /initrd.img
grub> boot

Booting From grub rescue>

If you’re in the GRUB rescue shell the commands are different, and you have to load the normal.mod and linux.mod modules:

grub rescue> set prefix=(hd0,1)/boot/grub
grub rescue> set root=(hd0,1)
grub rescue> insmod normal
grub rescue> normal
grub rescue> insmod linux
grub rescue> linux /boot/vmlinuz-3.13.0-29-generic root=/dev/sda1
grub rescue> initrd /boot/initrd.img-3.13.0-29-generic
grub rescue> boot

Tab-completion should start working after you load both modules.

Making Permanent Repairs

When you have successfully booted your system, run these commands to fix GRUB permanently:

# update-grub
Generating grub configuration file ...
Found background: /usr/share/images/grub/Apollo_17_The_Last_Moon_Shot_Edit1.tga
Found background image: /usr/share/images/grub/Apollo_17_The_Last_Moon_Shot_Edit1.tga
Found linux image: /boot/vmlinuz-3.13.0-29-generic
Found initrd image: /boot/initrd.img-3.13.0-29-generic
Found linux image: /boot/vmlinuz-3.13.0-27-generic
Found initrd image: /boot/initrd.img-3.13.0-27-generic
Found linux image: /boot/vmlinuz-3.13.0-24-generic
Found initrd image: /boot/initrd.img-3.13.0-24-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
# grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.

When you run grub-install remember you’re installing it to the boot sector of your hard drive and not to a partition, so do not use a partition number like /dev/sda1.

But It Still Doesn’t Work

If your system is so messed up that none of this works, try the Super GRUB2 live rescue disk. The official GNU GRUB Manual 2.00 should also be helpful.

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

This is a classic article written by Swapnil Bhartiya from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

If you run a live or home server, moving files between local machines or two remote machines is a basic requirement. There are many ways to achieve that. In this article, we talk about scp (the secure copy command), which encrypts the transferred file and password so no one can snoop. With scp you don’t have to start an FTP session or log into the system.

The scp tool relies on SSH (Secure Shell) to transfer files, so all you need is the username and password for the source and target systems. Another advantage is that with scp you can move files between two remote servers from your local machine, in addition to transferring data between local and remote machines. In that case you need usernames and passwords for both servers. Unlike rsync, you don’t have to log into any of the servers to transfer data from one machine to another.

This tutorial is aimed at new Linux users, so I will keep things as simple as possible. Let’s get started.

Copy a single file from the local machine to a remote machine:

The scp command needs a source and destination to copy files from one location to another location. This is the pattern that we use:

scp localmachine/path_to_the_file username@server_ip:/path_to_remote_directory

In the following example I am copying a local file from my macOS system to my Linux server (macOS, being a UNIX operating system, has native support for all UNIX/Linux tools).

scp /Volumes/MacDrive/Distros/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Here, ‘swapnil’ is the user on the server and 10.0.0.75 is the server IP. It will ask you to provide the password for that user, and then copy the file securely.
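
A side note on ports: if the remote SSH daemon listens on a non-standard port (2222 here is just an example), scp accepts it with the -P flag – note the capital P, since lowercase -p instead preserves file times and modes:

scp -P 2222 /Volumes/MacDrive/Distros/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/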

I can do the same from my local Linux machine:

scp /home/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

If you are running Windows 10, then you can use Ubuntu bash on Windows to copy files from the Windows system to Linux server:

scp /mnt/c/Users/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Copy a local directory to a remote server:

If you want to copy the entire local directory to the server, then you can add the -r flag to the command:

scp -r localmachine/path_to_the_directory username@server_ip:/path_to_remote_directory/

Make sure that the source directory doesn’t have a forward slash at the end of the path, while the destination path *must* have one.

Copy all files in a local directory to a remote directory

What if you only want to copy all the files inside a local directory to a remote directory? It’s simple: just add a forward slash and * at the end of the source directory and give the path of the destination directory. Don’t forget to add the -r flag to the command:

scp -r localmachine/path_to_the_directory/* username@server_ip:/path_to_remote_directory/

Copying files from remote server to local machine

If you want to make a copy of a single file, a directory, or all files on the server to the local machine, just follow the same examples above and exchange the source and destination.

Copy a single file:

scp username@server_ip:/path_to_remote_directory local_machine/path_to_the_file

Copy a remote directory to a local machine:

scp -r username@server_ip:/path_to_remote_directory local-machine/path_to_the_directory/

Make sure that the source directory doesn’t have a forward slash at the end of the path, while the destination path *must* have one.

Copy all files in a remote directory to a local directory:

scp -r username@server_ip:/path_to_remote_directory/* local-machine/path_to_the_directory/

Copy files from one directory of the same server to another directory securely from the local machine

Usually I ssh into that machine and then use rsync command to perform the job, but with SCP, I can do it easily without having to log into the remote server.

Copy a single file:

scp username@server_ip:/path_to_the_remote_file username@server_ip:/path_to_destination_directory/

Copy a directory from one location on a remote server to a different location on the same server:

scp -r username@server_ip:/path_to_the_remote_directory username@server_ip:/path_to_destination_directory/

Copy all files from one remote directory to another directory on the same server:

scp -r username@server_ip:/path_to_source_directory/* username@server_ip:/path_to_the_destination_directory/

Copy files from one remote server to another remote server from a local machine

Currently I have to ssh into one server in order to use the rsync command to copy files to another server. With scp, I can move files between two remote servers directly from my local machine, without having to log into either of them:

Copy a single file:

scp username@server1_ip:/path_to_the_remote_file username@server2_ip:/path_to_destination_directory/

Copy a directory from one location on a remote server to a different location on another remote server:

scp -r username@server1_ip:/path_to_the_remote_directory username@server2_ip:/path_to_destination_directory/

Copy all files in a remote directory to a directory on another remote server:

scp -r username@server1_ip:/path_to_source_directory/* username@server2_ip:/path_to_the_destination_directory/

Conclusion

As you can see, once you understand how things work, it will be quite easy to move your files around. That’s what Linux is all about, just invest your time in understanding some basics, then it’s a breeze!

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Quick question: How much space do you have left on your drives? A little or a lot? Follow up question: Do you know how to find out? If you happen to use a GUI desktop (e.g., GNOME, KDE, Mate, Pantheon, etc.), the task is probably pretty simple. But what if you’re looking at a headless server, with no GUI? Do you need to install tools for the task? The answer is a resounding no. All the necessary bits are already in place to help you find out exactly how much space remains on your drives. In fact, you have two very easy-to-use options at the ready.

In this article, I’ll demonstrate these tools. I’ll be using Elementary OS, which also includes a GUI option, but we’re going to limit ourselves to the command line. The good news is these command-line tools are readily available for every Linux distribution. On my testing system, there are a number of attached drives (both internal and external). The commands used are agnostic to where a drive is plugged in; they only care that the drive is mounted and visible to the operating system.

With that said, let’s take a look at the tools.

df

The df command is the tool I first used to discover drive space on Linux, way back in the 1990s. It’s very simple in both usage and reporting. To this day, df is my go-to command for this task. This command has a few switches but, for basic reporting, you really only need one. That command is df -H. The -H switch is for human-readable format. The output of df -H will report how much space is used, available, percentage used, and the mount point of every disk attached to your system (Figure 1).

Figure 1: The output of df -H on my Elementary OS system.

What if your list of drives is exceedingly long and you just want to view the space used on a single drive? With df, that is possible. Let’s take a look at how much space has been used up on our primary drive, located at /dev/sda1. To do that, issue the command:

df -H /dev/sda1

The output will be limited to that one drive (Figure 2).

Figure 2: How much space is on one particular drive?

You can also limit the reported fields shown in the df output. Available fields are:

  • source — the file system source
  • size — total number of blocks
  • used — space used on a drive
  • avail — space available on a drive
  • pcent — percent of used space, divided by total size
  • target — mount point of a drive

Let’s display the output of all our drives, showing only the size, used, and avail (or availability) fields. The command for this would be:

df -H --output=size,used,avail

The output of this command is quite easy to read (Figure 3).

Figure 3: Specifying what output to display for our drives.

The only caveat here is that we don’t know the source of the output, so we’d want to include source like so:

df -H --output=source,size,used,avail

Now the output makes more sense (Figure 4).

Figure 4: We now know the source of our disk usage.

du

Our next command is du. As you might expect, that stands for disk usage. The du command is quite different from the df command, in that it reports on directories and not drives. Because of this, you’ll want to know the names of directories to be checked. Let’s say I have a directory containing virtual machine files on my machine. That directory is /media/jack/HALEY/VIRTUALBOX. If I want to find out how much space is used by that particular directory, I’d issue the command:

du -h /media/jack/HALEY/VIRTUALBOX

The output of the above command will display the disk usage of each entry in the directory (Figure 5). Note that, by default, du reports directories rather than individual files; add the -a option if you want files listed as well.

Figure 5: The output of the du command on a specific directory.

So far, this command isn’t all that helpful. What if we want to know the total usage of a particular directory? Fortunately, du can handle that task. On the same directory, the command would be:

du -sh /media/jack/HALEY/VIRTUALBOX/

Now we know how much total space the files are using up in that directory (Figure 6).

Figure 6: My virtual machine files are using 559GB of space.

You can also use this command to see how much space is being used on all child directories of a parent, like so:

du -h /media/jack/HALEY

The output of this command (Figure 7) is a good way to find out what subdirectories are hogging up space on a drive.

Figure 7: How much space are my subdirectories using?

The du command is also a great tool to use in order to see a list of directories that are using the most disk space on your system. The way to do this is by piping the output of du to two other commands: sort and head. The command to find out the top 10 directories eating space on a drive would look something like this:

du -a /media/jack | sort -n -r | head -n 10

The output would list out those directories, from largest to least offender (Figure 8).

Figure 8: Our top ten directories using up space on a drive.
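
A variant worth knowing: GNU sort can order human-readable sizes with its -h flag, so you can keep du’s friendlier output and still rank the results:

du -h /media/jack | sort -h -r | head -n 10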

Not as hard as you thought

Finding out how much space is being used on your Linux-attached drives is quite simple. As long as your drives are mounted to the Linux system, both df and du will do an outstanding job of reporting the necessary information. With df you can quickly see an overview of how much space is used on a disk and with du you can discover how much space is being used by specific directories. These two tools in combination should be considered must-know for every Linux administrator.

And, in case you missed it, I recently showed how to determine your memory usage on Linux. Together, these tips will go a long way toward helping you successfully manage your Linux servers.

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Although there are already a lot of good security features built into Linux-based systems, one very important potential vulnerability can exist when local access is granted: file permission-based issues resulting from a user not assigning the correct permissions to files and directories. So, based upon the need for proper permissions, I will go over the ways to assign permissions and show you some examples where modification may be necessary.

Permission Groups

Each file and directory has three user based permission groups:

  • owner – The Owner permissions apply only to the owner of the file or directory; they will not impact the actions of other users.
  • group – The Group permissions apply only to the group that has been assigned to the file or directory; they will not affect the actions of other users.
  • all users – The All Users permissions apply to all other users on the system; this is the permission group that you want to watch the most.

Permission Types

Each file or directory has three basic permission types:

  • read – The Read permission refers to a user’s capability to read the contents of the file.
  • write – The Write permissions refer to a user’s capability to write or modify a file or directory.
  • execute – The Execute permission affects a user’s capability to execute a file or view the contents of a directory.

Viewing the Permissions

You can view the permissions by checking the file or directory permissions in your favorite GUI File Manager (which I will not cover here) or by reviewing the output of the “ls -l” command while in the terminal and while working in the directory which contains the file or folder.

The permission in the command line is displayed as: _rwxrwxrwx 1 owner:group

  1. User rights/Permissions
    1. The first character that I marked with an underscore is the special permission flag that can vary.
    2. The following set of three characters (rwx) is for the owner permissions.
    3. The second set of three characters (rwx) is for the Group permissions.
    4. The third set of three characters (rwx) is for the All Users permissions.
  2. Following that grouping is the integer/number that displays the number of hardlinks to the file.
  3. The last piece is the Owner and Group assignment formatted as Owner:Group.

Modifying the Permissions

When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.

Explicitly Defining Permissions

To explicitly define permissions you will need to reference the Permission Group and Permission Types.

The Permission Groups used are:

  • u – Owner
  • g – Group
  • o – Others
  • a – All users

The potential Assignment Operators are + (plus) and - (minus); these are used to tell the system whether to add or remove the specific permissions.

The Permission Types that are used are:

  • r – Read
  • w – Write
  • x – Execute

So for example, let’s say I have a file named file1 that currently has the permissions set to _rw_rw_rw, which means that the owner, group, and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.

To make this modification you would invoke the command: chmod a-rw file1
To add the permissions above you would invoke the command: chmod a+rw file1

As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.

Using Binary References to Set Permissions

Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three integers/numbers.

A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other users have no rights to the file.

The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.

  • r = 4
  • w = 2
  • x = 1

You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.
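
Working through the 640 example above makes the arithmetic plain:

owner:  r + w = 4 + 2 = 6
group:  r = 4
all others: (none) = 0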

So to set the permissions on file1 to _rwxr_____, you would enter chmod 740 file1.

Owners and Groups

I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory.

You use the chown command to change owner and group assignments; the syntax is simple:

chown owner:group filename

So to change the owner of file1 to user1 and the group to family you would enter chown user1:family file1.
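
A related tip: chown can also recurse through a directory tree with the -R flag (the path below is just an example), which saves issuing the command file by file:

chown -R user1:family /home/user1/shared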

Advanced Permissions

The special permissions flag can be marked with any of the following:

  • _ – no special permissions
  • d – directory
  • l – The file or directory is a symbolic link
  • s – This indicates the setuid/setgid permissions. This is not displayed in the special permission part of the permissions display, but is represented as an s in the execute portion of the owner or group permissions.
  • t – This indicates the sticky bit permissions. This is not displayed in the special permission part of the permissions display, but is represented as a t in the execute portion of the all users permissions.

Setuid/Setgid Special Permissions

The setuid/setgid permissions are used to tell the system to run an executable as the owner, with the owner’s permissions.

Be careful using setuid/setgid bits in permissions. If you incorrectly assign permissions to a file owned by root with the setuid/setgid bit set, then you can open your system to intrusion.

You can only assign the setuid/setgid bit by explicitly defining permissions. The character for the setuid/setgid bit is s (u+s sets setuid; g+s sets setgid).

So to set the setgid bit on file2.sh you would issue the command chmod g+s file2.sh.

Sticky Bit Special Permissions

The sticky bit can be very useful in shared environments, because when it has been assigned to the permissions on a directory, only a file’s owner can rename or delete that file.

You can only assign the sticky bit by explicitly defining permissions. The character for the sticky bit is t.

To set the sticky bit on a directory named dir1 you would issue the command chmod +t dir1.

When Permissions Are Important

Many users of Mac- or Windows-based computers never think about permissions, because those environments don’t focus so aggressively on user-based rights on files unless you are in a corporate environment. But now you are running a Linux-based system, where permission-based security is simplified and can be easily used to restrict access as you please.

So I will show you some documents and folders that you want to focus on and show you how the optimal permissions should be set.

  • home directories – The users’ home directories are important because you do not want other users to be able to view and modify the files in another user’s documents or desktop. To remedy this you will want the directory to have the drwx______ (700) permissions. So let’s say we want to enforce the correct permissions on the user user1’s home directory; that can be done by issuing the command chmod 700 /home/user1.
  • bootloader configuration files – If you decide to implement a password to boot specific operating systems then you will want to remove read and write permissions from the configuration file for all users but root. To do so you can change the permissions of the file to 700.
  • system and daemon configuration files – It is very important to restrict rights to system and daemon configuration files so that users cannot edit the contents. It may not be advisable to restrict read permissions, but restricting write permissions is a must. In these cases it may be best to modify the rights to 644.
  • firewall scripts – It may not always be necessary to block all users from reading the firewall file, but it is advisable to restrict the users from writing to the file. In this case the firewall script is run by the root user automatically on boot, so all other users need no rights; you can assign the 700 permissions.

Other examples can be given, but this article is already very lengthy, so if you want to share other examples of needed restrictions please do so in the comments.

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

There are certain tasks that are done so often, users take for granted just how simple they are. But then, you migrate to a new platform and those same simple tasks begin to require a small portion of your brain’s power to complete. One such task is moving files from one location to another. Sure, it’s most often considered one of the more rudimentary actions to be done on a computer. When you move to the Linux platform, however, you may find yourself asking “Now, how do I move files?”

If you’re familiar with Linux, you know there are always many routes to the same success. Moving files is no exception. You can opt for the power of the command line or the simplicity of the GUI – either way, you will get those files moved.

Let’s examine just how you can move those files about. First, we’ll examine the command line.

Command line moving

One of the issues so many users new to Linux face is the idea of having to use the command line. It can be somewhat daunting at first. Although modern Linux interfaces can help to ensure you rarely have to use this “old school” tool, there is a great deal of power you would be missing if you ignored it altogether. The command for moving files is a perfect illustration of this.

The command to move files is mv. It’s very simple and one of the first commands you will learn on the platform. Instead of just listing out the syntax and the usual switches for the command – and then allowing you to do the rest – let’s walk through how you can make use of this tool.

The mv command does one thing – it moves a file from one location to another. This can be somewhat misleading because mv is also used to rename files. How? Simple. Here’s an example. Say you have the file testfile in /home/jack/ and you want to rename it to testfile2 (while keeping it in the same location). To do this, you would use the mv command like so:

mv /home/jack/testfile /home/jack/testfile2

or, if you’re already within /home/jack:

mv testfile testfile2

The above commands would move /home/jack/testfile to /home/jack/testfile2 – effectively renaming the file. But what if you simply wanted to move the file? Say you want to keep your home directory (in this case /home/jack) free from stray files. You could move that testfile into /home/jack/Documents with the command:

mv /home/jack/testfile /home/jack/Documents/

With the above command, you have relocated the file into a new location, while retaining the original file name.

What if you have a number of files you want to move? Luckily, you don’t have to issue the mv command for every file. You can use wildcards to help you out. Here’s an example:

You have a number of .mp3 files in your ~/Downloads directory (~/ is an easy way to represent your home directory – in our earlier example, that would be /home/jack/) and you want them in ~/Music. You could quickly move them with a single command, like so:

mv ~/Downloads/*.mp3 ~/Music/

That command would move every file ending in .mp3 from the Downloads directory into the Music directory.

Should you want to move a file into the parent directory of the current working directory, there’s an easy way to do that. Say you have the file testfile located in ~/Downloads and you want it in your home directory. If you are currently in the ~/Downloads directory, you can move it up one folder (to ~/) like so:

mv testfile ../ 

The “../” means the directory one level up. If you’re buried deeper, say ~/Downloads/today/, you can still easily move that file with:

mv testfile ../../

Just remember, each “../” represents one level up.
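
One cautionary flag worth adopting early: mv will silently overwrite an existing file of the same name at the destination. The -i (interactive) option makes it ask first:

mv -i testfile ../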

As you can see, moving files from the command line isn’t difficult at all.

GUI

There are a lot of GUIs available for the Linux platform. On top of that, there are a lot of file managers you can use. The most popular file managers are Nautilus (GNOME) and Dolphin (KDE). Both are very powerful and flexible. I want to illustrate how files are moved using the Nautilus file manager.

Nautilus has probably the most efficient means of moving files about. Here’s how it’s done:

  1. Open up the Nautilus file manager.
  2. Locate the file you want to move and right-click said file.
  3. From the pop-up menu (Figure 1) select the “Move To” option.
  4. When the Select Destination window opens, navigate to the new location for the file.
  5. Once you’ve located the destination folder, click Select.
Figure 1: The Nautilus pop-up menu.

This context menu also allows you to copy the file to a new location, move the file to the Trash, and more.

If you’re more of a drag and drop kind of person, fear not – Nautilus is ready to serve. Let’s say you have a file in your home directory and you want to drag it to Documents. By default, Nautilus will have a few bookmarks in the left pane of the window. You can drag the file into the Documents bookmark without having to open a second Nautilus window. Simply click, hold, and drag the file from the main viewing pane to the Documents bookmark.

If, however, the destination for that file is not listed in your bookmarks (or doesn’t appear in the current main viewing pane), you’ll need to open up a second Nautilus window. Side by side, you can then drag the file from the source folder in the original window to the destination folder in the second window.

If you need to move multiple files, you’re still in luck. Similar to nearly every modern user interface, you can do a multi-select of files by holding down the Ctrl button as you click each file. After you have selected each file (Figure 2), you can either right-click one of the selected files and then choose the Move To option, or just drag and drop them into a new location.

Figure 2: The selected files (in this case, folders) will each be highlighted.

Moving files on the Linux desktop is incredibly easy. Either with the command line or your desktop of choice, you have numerous routes to success – all of which are user-friendly and quick to master.