How to Install Apache Maven on Debian 9

Apache Maven is an open source project management and comprehension tool used primarily for Java projects. Maven uses a Project Object Model (POM), which is essentially an XML file containing information about the project, configuration details, the project’s dependencies, and so on.

In this tutorial we will show you two different ways to install Apache Maven on Debian 9.

The official Debian repositories contain Maven packages that can be installed with the apt package manager. This is the easiest way to install Maven on Debian; however, the version included in the repositories typically lags several releases behind the latest version of Maven.

To install the latest version of Maven, follow the instructions provided in the second part of this article, where we download Maven from the official website.

Choose one of the installation methods that will work best for you.

Prerequisites

In order to be able to install packages on your Debian system, you must be logged in as a user with sudo privileges.

Installing Apache Maven on Debian with Apt

Installing Maven on Debian using apt is a simple, straightforward process.

  1. First, update the package index:

    sudo apt update

  2. Install Maven by running the following command:

    sudo apt install maven

  3. Verify the installation by typing:

    mvn -version

    The output should look something like this:

    Apache Maven 3.3.9
    Maven home: /usr/share/maven
    Java version: 1.8.0_181, vendor: Oracle Corporation
    Java home: /usr/lib/jvm/java-8-openjdk-amd64/jre
    Default locale: en_US, platform encoding: UTF-8
    OS name: "linux", version: "4.9.0-8-amd64", arch: "amd64", family: "unix"

That’s it. Maven is now installed on your Debian system.

Install the Latest Release of Apache Maven

The following sections provide detailed information for installing the latest Apache Maven version on Debian 9. We will download the latest release of Apache Maven from their official website.

1. Install OpenJDK

Maven 3.3+ requires JDK 1.7 or above to be installed on your system. We’ll install OpenJDK, which is the default Java development kit and runtime in Debian 9.

Start by updating the package index:

sudo apt update

Install the OpenJDK package by typing:

sudo apt install default-jdk

Verify the Java installation by checking its version:

java -version

The output should look something like this:

openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-2~deb9u1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

2. Download Apache Maven

At the time of writing this article, the latest version of Apache Maven is 3.6.0. Before continuing with the next step you should check the Maven download page to see if a newer version is available.

Download the Apache Maven archive in the /tmp directory using the following wget command:

wget https://www-us.apache.org/dist/maven/maven-3/3.6.0/binaries/apache-maven-3.6.0-bin.tar.gz -P /tmp

Once the download is completed, extract the archive in the /opt directory:

sudo tar xf /tmp/apache-maven-*.tar.gz -C /opt

To have more control over Maven versions and updates, we will create a symbolic link maven which will point to the Maven installation directory:

sudo ln -s /opt/apache-maven-3.6.0 /opt/maven

Later if you want to upgrade your Maven installation you can simply unpack the newer version and change the symlink to point to the latest version.
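The upgrade flow described above boils down to repointing the symlink. The sketch below simulates it in a scratch directory under /tmp (with a hypothetical 3.6.1 release) so nothing outside /tmp is touched:

```shell
# Simulate the Maven layout in /tmp instead of /opt (no root needed)
mkdir -p /tmp/mvn-demo/apache-maven-3.6.0 /tmp/mvn-demo/apache-maven-3.6.1
# Initial install points the symlink at 3.6.0
ln -sfn /tmp/mvn-demo/apache-maven-3.6.0 /tmp/mvn-demo/maven
# Upgrade: after unpacking the new version, repoint the symlink
ln -sfn /tmp/mvn-demo/apache-maven-3.6.1 /tmp/mvn-demo/maven
readlink /tmp/mvn-demo/maven
```

On the real system the same repointing runs against /opt with sudo, for example: sudo ln -sfn /opt/apache-maven-3.6.1 /opt/maven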

3. Set up environment variables

Next, we’ll need to set up the environment variables. To do so, open your text editor and create a new file named maven.sh inside the /etc/profile.d/ directory.

sudo nano /etc/profile.d/maven.sh

Paste the following configuration:

/etc/profile.d/maven.sh

export JAVA_HOME=/usr/lib/jvm/default-java
export M2_HOME=/opt/maven
export MAVEN_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}

Save and close the file. This script will be sourced at shell startup.

Make the script executable by typing:

sudo chmod +x /etc/profile.d/maven.sh

Finally load the environment variables using the following command:

source /etc/profile.d/maven.sh
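To see how this works without touching /etc, here is a sketch that sources an equivalent script from /tmp (a stand-in for /etc/profile.d/maven.sh) and confirms the variables are set:

```shell
# Write a demo copy of the script (the real one lives in /etc/profile.d/maven.sh)
cat > /tmp/maven-env-demo.sh <<'EOF'
export JAVA_HOME=/usr/lib/jvm/default-java
export M2_HOME=/opt/maven
export MAVEN_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}
EOF
# Source it and confirm the variables are now in the environment
. /tmp/maven-env-demo.sh
echo "MAVEN_HOME=$MAVEN_HOME"
```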

4. Verify the installation

To validate that Maven is installed properly, use the mvn -version command, which will print the Maven version:

mvn -version

You should see something like the following:

Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 2018-10-24T18:41:47Z)
Maven home: /opt/maven
Java version: 1.8.0_181, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-8-openjdk-amd64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.9.0-8-amd64", arch: "amd64", family: "unix"

That’s it. The latest version of Maven is now installed on your Debian system.

Conclusion

You have successfully installed Apache Maven on your Debian 9 system. You can now visit the official Apache Maven documentation page and learn how to get started with Maven.

If you hit a problem or have feedback, leave a comment below.


How to Install Laravel on Ubuntu 18.04

Laravel is an open-source PHP web application framework with expressive, elegant syntax. Laravel allows you to easily build scalable and flexible web applications, RESTful APIs, and eCommerce solutions.

With built-in features such as routing, authentication, sessions, caching, and unit testing, Laravel is the framework of choice for many PHP developers.

In this tutorial we will show you how to install Laravel on an Ubuntu 18.04 system. The same instructions apply for Ubuntu 16.04 and any Ubuntu based distribution, including Linux Mint, Kubuntu and Elementary OS.

Prerequisites

Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.

Update the system packages to the latest versions:

sudo apt update && sudo apt upgrade

Installing PHP

PHP 7.2, which is the default PHP version in Ubuntu 18.04, is fully supported and recommended for Laravel 5.7.

Run the following command to install PHP and all required PHP modules:

sudo apt install php7.2-common php7.2-cli php7.2-gd php7.2-mysql php7.2-curl php7.2-intl php7.2-mbstring php7.2-bcmath php7.2-imap php7.2-xml php7.2-zip

Installing Composer

Composer is a dependency manager for PHP and we will be using it to download the Laravel core and install all necessary Laravel components.

To install composer globally, download the Composer installer with curl and move the file to the /usr/local/bin directory:

curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer

Verify the installation by printing the Composer version:

composer --version

The output should look something like this:

Composer version 1.8.0 2018-12-03 10:31:16

Installing Laravel

At the time of writing this article, the latest stable version of Laravel is version 5.7.

Run the Composer create-project command to install Laravel in the my_app directory:

composer create-project --prefer-dist laravel/laravel my_app

The command above will fetch all required PHP packages. The process may take a few minutes, and if it is successful the end of the output should look like the following:

Package manifest generated successfully.
> @php artisan key:generate --ansi
Application key set successfully.

At this point you have Laravel installed on your Ubuntu system.

When installed via Composer, Laravel automatically creates a file named .env. This file includes custom configuration variables, including the database credentials. You can read more about how to configure Laravel here.
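For illustration, the generated .env file contains entries along these lines (the values below are the framework's stock placeholders, shown as an example rather than taken from a real project):

```shell
APP_NAME=Laravel
APP_ENV=local
APP_DEBUG=true
APP_URL=http://localhost

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
```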

You can start the development server by navigating to the Laravel project directory and executing the artisan serve command:

cd ~/my_app
php artisan serve

The output will look something like this:

Laravel development server started: <http://127.0.0.1:8000>

Laravel can use SQLite, PostgreSQL, MongoDB or MySQL/MariaDB database to store all its data.

If you want to use Laravel Mix to compile assets you’ll need to install Node.js and Yarn.

Verifying the Installation

Open your browser, type http://127.0.0.1:8000 and assuming the installation is successful, a screen similar to the following will appear:

Conclusion

Congratulations, you have successfully installed Laravel 5.7 on your Ubuntu 18.04 machine. You can now start developing your application.

If you have questions feel free to leave a comment below.


How to organize with Calculist: Ideas, events, and more

Thoughts. Ideas. Plans. We all have a few of them. Often, more than a few. And all of us want to make some or all of them a reality.

Far too often, however, those thoughts and ideas and plans are a jumble inside our heads. They refuse to take a discernible shape, preferring instead to rattle around here, there, and everywhere in our brains.

One solution to that problem is to put everything into an outline. An outline can be a great way to organize what you need to organize and give it the shape you need to take it to the next step.

A number of people I know rely on a popular web-based tool called WorkFlowy for their outlining needs. If you prefer your applications (including web ones) to be open source, you’ll want to take a look at Calculist.

The brainchild of Dan Allison, Calculist is billed as the thinking tool for problem solvers. It does much of what WorkFlowy does, and it has a few features that its rival is missing.

Let’s take a look at using Calculist to organize your ideas (and more).

Getting started

If you have a server, you can try to install Calculist on it. If, like me, you don’t have a server or just don’t have the technical chops, you can turn to the hosted version of Calculist.

Sign up for a no-cost account, then log in. Once you’ve done that, you’re ready to go.

Creating a basic outline

What you use Calculist for really depends on your needs. I use Calculist to create outlines for articles and essays, to create lists of various sorts, and to plan projects. Regardless of what I’m doing, every outline I create follows the same pattern.

To get started, click the New List button. This creates a blank outline (which Calculist calls a list).

The outline is a blank slate waiting for you to fill it up. Give the outline a name, then press Enter. When you do that, Calculist adds the first blank line for your outline. Use that as your starting point.

Add a new line by pressing Enter. To indent a line, press the Tab key while on that line. If you need to create a hierarchy, you can indent lines as far as you need to indent them. Press Shift+Tab to outdent a line.

Keep adding lines until you have a completed outline. Calculist saves your work every few seconds, so you don’t need to worry about that.

Editing an outline

Outlines are fluid. They morph. They grow and shrink. Individual items in an outline change. Calculist makes it easy for you to adapt and make those changes.

You already know how to add an item to an outline. If you don’t, go back a few paragraphs for a refresher. To edit text, click on an item and start typing. Don’t double-click (more on this in a few moments). If you accidentally double-click on an item, press Esc on your keyboard and all will be well.

Sometimes you need to move an item somewhere else in the outline. Do that by clicking and holding the bullet for that item. Drag the item and drop it wherever you want it. Anything indented below the item moves with it.

At the moment, Calculist doesn’t support adding notes or comments to an item in an outline. A simple workaround I use is to add a line indented one level deeper than the item where I want to add the note. That’s not the most elegant solution, but it works.

Let your keyboard do the walking

Not everyone likes to use their mouse to perform actions in an application. Like a good desktop application, you’re not at the mercy of your mouse when you use Calculist. It has many keyboard shortcuts that you can use to move around your outlines and manipulate them.

The keyboard shortcuts I mentioned a few paragraphs ago are just the beginning. There are a couple of dozen keyboard shortcuts that you can use.

For example, you can focus on a single portion of an outline by pressing Ctrl+Right Arrow key. To get back to the full outline, press Ctrl+Left Arrow key. There are also shortcuts for moving up and down in your outline, expanding and collapsing lists, and deleting items.

You can view the list of shortcuts by clicking on your user name in the upper-right corner of the Calculist window and clicking Preferences. You can also find a list of keyboard shortcuts in the Calculist GitHub repository.

If you need or want to, you can change the shortcuts on the Preferences page. Click on the shortcut you want to change—you can, for example, change the shortcut for zooming in on an item to Ctrl+0.

The power of commands

Calculist’s keyboard shortcuts are useful, but they’re only the beginning. The application has command mode that enables you to perform basic actions and do some interesting and complex tasks.

To use a command, double-click an item in your outline or press Ctrl+Enter while on it. The item turns black. Type a letter or two, and a list of commands displays. Scroll down to find the command you want to use, then press Enter. There’s also a list of commands in the Calculist GitHub repository.

The commands are quite comprehensive. While in command mode, you can, for example, delete an item in an outline or delete an entire outline. You can import or export outlines, sort and group items in an outline, or change the application’s theme or font.

Final thoughts

I’ve found that Calculist is a quick, easy, and flexible way to create and view outlines. It works equally well on my laptop and my phone, and it packs not only the features I regularly use but many others (including support for LaTeX math expressions and a table/spreadsheet mode) that more advanced users will find useful.

That said, Calculist isn’t for everyone. If you prefer your outlines on the desktop, then check out TreeLine, Leo, or Emacs org-mode.


Linux Time Command | Linuxize

The time command is used to determine how long a given command takes to run. It is useful for testing the performance of your scripts and commands.

For example, if you have two different scripts doing the same job and you want to know which one performs better you can use the Linux time command to determine the duration of execution of each script.

Time Command Versions

Both Bash and Zsh, the most widely used Linux shells, have their own built-in versions of the time command, which take precedence over the GNU time command.

You can use the type command to determine whether time is a binary or a built-in keyword.

# Bash
time is a shell keyword

# Zsh
time is a reserved word

# GNU time (sh)
time is /usr/bin/time

To use the GNU time command, you need to specify the full path to the time binary (usually /usr/bin/time), use the env command, or use a leading backslash (\time), which prevents both aliases and built-ins from being used.

GNU time also allows you to format the output and provides other useful information, such as memory usage, I/O, and IPC calls.

Using Linux Time Command

In the following example, we are going to measure the time taken to download the Linux kernel using the wget tool:

time wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.9.tar.xz

What will be printed as an output depends on the version of the time command you’re using:

# Bash
real 0m33.961s
user 0m0.340s
sys 0m0.940s

# Zsh
0.34s user 0.94s system 4% cpu 33.961 total

# GNU time (sh)
0.34user 0.94system 0:33.96elapsed 4%CPU (0avgtext+0avgdata 6060maxresident)k
0inputs+201456outputs (0major+315minor)pagefaults 0swaps

  • real or total or elapsed (wall clock time) is the time from start to finish of the call. It is the time from the moment you hit the Enter key until the moment the wget command is completed.
  • user – amount of CPU time spent in user mode.
  • system or sys – amount of CPU time spent in kernel mode.
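As a quick, reproducible illustration of the above (using bash and a short sleep in place of the network download), you can capture the keyword's timing report, which is written to stderr:

```shell
# Run the bash `time` keyword on a 0.2-second sleep; the report goes to stderr
bash -c '{ time sleep 0.2 ; } 2>/tmp/time-demo.txt'
# The captured report contains the real/user/sys lines described above
cat /tmp/time-demo.txt
```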

Conclusion

By now you should have a good understanding of how to use the time command. If you want to learn more about the GNU time command, visit the time man page.


How to manage your Linux environment

Linux user environments help you find the command you need and get a lot done without needing details about how the system is configured. Where the settings come from and how they can be modified is another matter.


The configuration of your user account on a Linux system simplifies your use of the system in a multitude of ways. You can run commands without knowing where they’re located. You can reuse previously run commands without worrying how the system is keeping track of them. You can look at your email, view man pages, and get back to your home directory easily no matter where you might have wandered off to in the file system. And, when needed, you can tweak your account settings so that it works even more to your liking.

Linux environment settings come from a series of files — some are system-wide (meaning they affect all user accounts) and some are configured in files that are sitting in your home directory. The system-wide settings take effect when you log in and local ones take effect right afterwards, so the changes that you make in your account will override system-wide settings. For bash users, these files include these system files:

/etc/environment
/etc/bash.bashrc
/etc/profile

And some of these local files:

~/.bashrc
~/.profile (not read if ~/.bash_profile or ~/.bash_login exists)
~/.bash_profile
~/.bash_login

You can modify any of the four local files that exist, since they sit in your home directory and belong to you.

Viewing your Linux environment settings

To view your environment settings, use the env command. Your output will likely look similar to this:

$ env
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;
01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:
*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:
*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:
*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;
31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:
*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:
*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:
*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:
*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:
*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:
*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:
*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:
*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:
*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:
*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:
*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:
*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.spf=00;36:
SSH_CONNECTION=192.168.0.21 34975 192.168.0.11 22
LESSCLOSE=/usr/bin/lesspipe %s %s
LANG=en_US.UTF-8
OLDPWD=/home/shs
XDG_SESSION_ID=2253
USER=shs
PWD=/home/shs
HOME=/home/shs
SSH_CLIENT=192.168.0.21 34975 22
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop
SSH_TTY=/dev/pts/0
MAIL=/var/mail/shs
TERM=xterm
SHELL=/bin/bash
SHLVL=1
LOGNAME=shs
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
XDG_RUNTIME_DIR=/run/user/1000
PATH=/home/shs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
LESSOPEN=| /usr/bin/lesspipe %s
_=/usr/bin/env

While you’re likely to get a lot of output, the first big section shown above deals with the colors that are used on the command line to identify various file types. When you see something like *.tar=01;31:, this tells you that tar files will be displayed in a file listing in red, while *.jpg=01;35: tells you that jpg files will show up in purple. These colors are meant to make it easy to pick out certain files from a file listing. You can learn more about how these colors are defined and how to customize them at Customizing your colors on the Linux command line.

One easy way to turn colors off when you prefer a simpler display is to use a command such as this one:

$ ls -l --color=never

That command could easily be turned into an alias:

$ alias ll2='ls -l --color=never'

You can also display individual settings using the echo command. In this command, we display the number of commands that will be remembered in our history buffer:

$ echo $HISTSIZE
1000

Your current and previous locations in the file system are also remembered:

PWD=/home/shs
OLDPWD=/tmp
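These two variables are what pwd and cd - rely on. A short, self-contained demonstration (run from any directory):

```shell
cd /tmp          # current directory: PWD=/tmp
cd /             # now PWD=/ and OLDPWD=/tmp
echo "$OLDPWD"   # prints the directory we just left
cd - >/dev/null  # `cd -` uses OLDPWD to jump back
pwd              # back in /tmp
```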

Making changes

You can make changes to environment settings with a command like the one below, but add a line such as HISTSIZE=1234 to your ~/.bashrc file if you want the setting to persist across sessions.

$ export HISTSIZE=1234

What it means to “export” a variable

Exporting a variable makes the setting available to your shell and any subshells. By default, user-defined variables are local and are not exported to new processes such as subshells and scripts. The export command makes variables available to child processes.
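A quick demonstration of the difference (the variable names here are made up for the example):

```shell
LOCAL_VAR="hidden"          # plain assignment: visible only in this shell
export SHARED_VAR="visible" # exported: copied into child processes
# A child shell sees only the exported variable; LOCAL_VAR comes back empty
sh -c 'echo "child sees: [$LOCAL_VAR] [$SHARED_VAR]"'
```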

Adding and removing variables

You can create new variables and make them available to you on the command line and subshells quite easily. However, these variables will not survive your logging out and then back in again unless you also add them to ~/.bashrc or a similar file.

$ export MSG="Hello, World!"

You can unset a variable when you no longer need it by using the unset command:

$ unset MSG

If the variable is defined in one of your startup files, you can easily set it back up by sourcing the file(s). For example:

$ echo $MSG
Hello, World!
$ unset MSG
$ echo $MSG

$ . ~/.bashrc
$ echo $MSG
Hello, World!

Wrap-up

User accounts are set up with an appropriate set of startup files for creating a useful user environment, but both individual users and sysadmins can change the default settings by editing their personal setup files (users) or the files from which many of the settings originate (sysadmins).


Sandra Henry-Stocker has been administering Unix systems for more than 30 years. She describes herself as “USL” (Unix as a second language) but remembers enough English to write books and buy groceries. She lives in the mountains in Virginia where, when not working with or writing about Unix, she’s chasing the bears away from her bird feeders.


Continuous response: The essential process we’re ignoring in DevOps

Continuous response (CR) is an overlooked link in the DevOps process chain. The two other major links—continuous integration (CI) and continuous delivery (CD)—are well understood, but CR is not. Yet, CR is the essential element of follow-through required to make customers happy and fulfill the promise of greater speed and agility. At the heart of the DevOps movement is the need for greater velocity and agility to bring businesses into our new digital age. CR plays a pivotal role in enabling this.

Defining CR

We need a crisp definition of CR to move forward with breaking it down. To put it into context, let’s revisit the definitions of continuous integration (CI) and continuous delivery (CD). Here are Gartner’s definitions as I wrote them down in 2017:

Continuous integration is the practice of integrating, building, testing, and delivering functional software on a scheduled, repeatable, and automated basis.

Continuous delivery is a software engineering approach where teams keep producing valuable software in short cycles while ensuring that the software can be reliably released at any time.

I propose the following definition for CR:

Continuous response is a practice where developers and operators instrument, measure, observe, and manage their deployed software looking for changes in performance, resiliency, end-user behavior, and security posture and take corrective actions as necessary.

We can argue about whether these definitions are 100% correct. They are good enough for our purpose, which is framing the definition of CR in rough context so we can understand that it is really just the last link in the chain of a holistic cycle.

This cycle is often drawn as a multi-colored ring: the famous OODA Loop. Before continuing, let’s touch on what the OODA Loop is and why it’s relevant to DevOps. We’ll keep it brief, though, as there is already a long history between the OODA Loop and DevOps.

A brief aside: The OODA Loop

At the heart of core DevOps thinking is using the OODA Loop to create a proactive process for evolving and responding to changing environments. A quick web search makes it easy to learn the long history between the OODA Loop and DevOps, but if you want the deep dive, I highly recommend The Tao of Boyd: How to Master the OODA Loop.

The “evolved OODA Loop” presented by John Boyd is a cycle of four phases (observe, orient, decide, act), with feedback from the later phases flowing back into observation.

The most important thing to understand about the OODA Loop is that it’s a cognitive process for adapting to and handling changing circumstances.

The second most important thing to understand about the OODA Loop is, since it is a thought process that is meant to evolve, it depends on driving feedback back into the earlier parts of the cycle as you iterate.

CI, CD, and CR are each their own isolated OODA Loops within the overall DevOps OODA Loop. The key here is that each OODA Loop is an evolving thought process for how test, release, and success are measured. Simply put, those who can execute on the OODA Loop fastest will win.

Put differently, DevOps wants to drive speed (executing the OODA Loop faster) combined with agility (taking feedback and using it to constantly adjust the OODA Loop). This is why CR is a vital piece of the DevOps process. We must drive production feedback into the DevOps maturation process. The DevOps notion of Culture, Automation, Measurement, and Sharing (CAMS) partially but inadequately captures this, whereas CR provides a much cleaner continuation of CI/CD in my mind.

Breaking CR down

CR has more depth and breadth than CI or CD. This is natural, given that what we’re categorizing is the post-deployment process by which our software takes a variety of actions, from autonomic responses to analytics of the customer experience. I think that, when it’s broken down, there are three key buckets that CR components fall into. Each of these three areas forms a complete OODA Loop; however, the level of automation throughout the OODA Loop varies significantly.

The following table will help clarify the three areas of CR:

CR Type    | Purpose                                    | Examples
-----------|--------------------------------------------|---------
Real-time  | Autonomics for availability and resiliency | Auto-scaling, auto-healing, developer-in-the-loop automated responses to real-time failures, automated root-cause analysis
Analytic   | Feature/fix pipeline                       | A/B testing, service response times, customer interaction models
Predictive | History-based planning                     | Capacity planning, hardware failure prediction models, cost-basis analysis

Real-time CR is probably the best understood of the three. This kind of CR is where our software has been instrumented for known issues and can take an immediate, automated response (autonomics). Examples of known issues include responding to high or low demand (e.g., elastic auto-scaling), responding to expected infrastructure resource failures (e.g., auto-healing), and responding to expected distributed application failures (e.g., circuit breaker pattern). In the future, we will see machine learning (ML) and similar technologies applied to automated root-cause analysis and event correlation, which will then provide a path towards “no ops” or “zero ops” operational models.

Analytic CR is still the most manual of the CR processes. This kind of CR is focused primarily on observing end-user experience and providing feedback to the product development cycle to add features or fix existing functionality. Examples of this include traditional A/B website testing, measuring page-load times or service-response times, post-mortems of service failures, and so on.

Predictive CR, due to the resurgence of AI and ML, is one of the innovation areas in CR. It uses historical data to predict future needs. ML techniques are allowing this area to become more fully automated. Examples include automated and predictive capacity planning (primarily for the infrastructure layer), automated cost-basis analysis of service delivery, and real-time reallocation of infrastructure resources to resolve capacity and hardware failure issues before they impact the end-user experience.

Diving deeper on CR

CR, like CI or CD, is a DevOps process supported by a set of underlying tools. CI and CD are not Jenkins, unit tests, or automated deployments alone. They are a process flow. Similarly, CR is a process flow that begins with the delivery of new code via CD, which open source tools like Spinnaker give us. CR is not monitoring, machine learning, or auto-scaling, but a diverse set of processes that occur after code deployment, supported by a variety of tools. CR is also different in two specific ways.

First, it is different because, by its nature, it is broader. The general software development lifecycle (SDLC) process means that most CI/CD processes are similar. However, code running in production differs from app to app or service to service. This means that CR differs as well.

Second, CR is different because it is nascent. Like CI and CD before it, the process and tools existed before they had a name. Over time, CI/CD became more normalized and easier to scope. CR is new, hence there is lots of room to discuss what’s in or out. I welcome your comments in this regard and hope you will run with these ideas.

CR: Closing the loop on DevOps

DevOps arose because of the need for greater service delivery velocity and agility. Essentially, DevOps is an extension of agile software development practices to an operational mindset. It’s a direct response to the flexibility and automation possibilities that cloud computing affords. However, much of the thinking on DevOps to date has focused on deploying the code to production and ends there. But our jobs don’t end there. As professionals, we must also make certain our code is behaving as expected, we are learning as it runs in production, and we are taking that learning back into the product development process.

This is where CR lives and breathes. DevOps without CR is the same as saying there is no OODA Loop around the DevOps process itself. It’s like saying that operators’ and developers’ jobs end with the code being deployed. We all know this isn’t true. Customer experience is the ultimate measurement of our success. Can people use the software or service without hiccups or undue friction? If not, we need to fix it. CR is the final link in the DevOps chain that enables delivering the truest customer experience.

If you aren’t thinking about continuous response, you aren’t doing DevOps. Share your thoughts on CR, and tell me what you think about the concept and the definition.

This article is based on The Essential DevOps Process We’re Ignoring: Continuous Response, which originally appeared on the Cloudscaling blog under a CC BY 4.0 license and is republished with permission.


How to quickly deploy, run Linux applications as unikernels

Unikernels are a smaller, faster, and more secure option for deploying applications on cloud infrastructure. With NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding.


Building and deploying lightweight apps is becoming an easier and more reliable process with the emergence of unikernels. While limited in functionality, unikernels offer many advantages in terms of speed and security.

What are unikernels?

A unikernel is a very specialized, single-address-space machine image. It fills the same role as the cloud applications that have come to dominate so much of the internet, but it is considerably smaller and single-purpose. Unikernels are lightweight, providing only the resources needed. They load very quickly and are considerably more secure, having a very limited attack surface. Any drivers, I/O routines, and support libraries that are required are included in the single executable. The resultant virtual image can then be booted and run without anything else being present, and will often run 10 to 20 times faster than a container.

Would-be attackers cannot drop into a shell and try to gain control because there is no shell. They can’t try to grab the system’s /etc/passwd or /etc/shadow files because these files don’t exist. Creating a unikernel is much like turning your application into its own OS. With a unikernel, the application and the OS become a single entity. You omit what you don’t need, thereby removing vulnerabilities and improving performance many times over.

In short, unikernels:

  • Provide improved security (e.g., making shell code exploits impossible)
  • Have much smaller footprints than standard cloud apps
  • Are highly optimized
  • Boot extremely quickly

Are there any downsides to unikernels?

The only serious downside to unikernels is that you have to build them. For many developers, this has been a giant hurdle. Trimming an application down to just what is needed and then producing a tight, smoothly running executable can be complex because of the work’s low-level nature. In the past, you pretty much had to be a systems developer or a low-level programmer to generate them.

How is this changing?

Just recently (March 24, 2019), NanoVMs announced a tool that loads any Linux application as a unikernel. Using NanoVMs OPS, anyone can run a Linux application as a unikernel with no additional coding. The application will also run faster and more safely, with less cost and overhead.

What is NanoVMs OPS?

NanoVMs OPS is a unikernel tool for developers. It allows you to run all sorts of enterprise-class software while still having extremely tight control over how it works.

Other benefits associated with OPS include:

  • Developers need no prior experience or knowledge to build unikernels.
  • The tool can be used to build and run unikernels locally on a laptop.
  • No accounts need to be created, and only a single download and one command are required to execute OPS.

An intro to NanoVMs is available on the NanoVMs YouTube channel. You can also check out the company’s LinkedIn page and read about NanoVMs security here.

Here is some information on how to get started.
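As a rough sketch of what getting started looks like in practice: OPS can read an optional config.json describing how the unikernel should run. The Args and Env field names below follow the OPS documentation, but treat this as an illustrative sketch to verify against the current release; the values are placeholders:

```json
{
  "Args": ["--port", "8080"],
  "Env": {
    "APP_MODE": "production"
  }
}
```

With OPS installed, a command along the lines of ops run -c config.json myapp would then boot the binary as a unikernel with those arguments and environment (again, check the flag names against the OPS docs for your version).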


Sandra Henry-Stocker has been administering Unix systems for more than 30 years. She describes herself as “USL” (Unix as a second language) but remembers enough English to write books and buy groceries. She lives in the mountains in Virginia where, when not working with or writing about Unix, she’s chasing the bears away from her bird feeders.


Scheduling Cron Jobs with Crontab

Cron is a scheduling daemon that executes tasks at specified intervals. These tasks are called cron jobs and are mostly used to automate system maintenance or administration.

For example, you could set a cron job to back up your databases or data, update your system with the latest security patches, check your disk space usage, send emails, and more. Some applications, such as Drupal or Magento, require cron jobs to perform certain functions.

You can schedule cron jobs to run by minute, hour, day of the month, month, day of the week or any combination of these.

What is a Crontab File?

Crontab (cron table) is a text file that specifies the schedule of cron jobs. There are two types of crontab files: system-wide crontab files and individual user crontab files.

User crontab files are stored under the user’s name, and their location varies by operating system. On Red Hat-based systems such as CentOS, crontab files are stored in the /var/spool/cron directory, while on Debian and Ubuntu they are stored in the /var/spool/cron/crontabs directory.

Although you can edit the user crontab files manually, it is recommended to use the crontab command.

/etc/crontab and the files inside the /etc/cron.d directory are system-wide crontab files which can be edited only by the system administrators.

In most Linux distributions you can also put scripts inside the /etc/cron.{hourly,daily,weekly,monthly} directories, and the scripts will be executed every hour/day/week/month.

Crontab Syntax and Operators

Each line in the user crontab file contains five time and date fields separated by spaces, followed by the command to be run.

* * * * * command(s)
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday = 0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)

The first five fields may each contain one or more values separated by commas, or a range of values separated by a hyphen.

  • * – The asterisk operator means any value, or always. If you have the asterisk symbol in the Hour field, the task will be performed every hour.
  • , – The comma operator allows you to specify a list of values for repetition. For example, if you have 1,3,5 in the Hour field, the task will run at 1 am, 3 am, and 5 am.
  • - – The hyphen operator allows you to specify a range of values. If you have 1-5 in the Day of week field, the task will run every weekday (from Monday to Friday).
  • / – The slash operator allows you to specify values repeated over a step interval. For example, if you have */4 in the Hour field, the action will be performed every four hours. It is the same as specifying 0,4,8,12,16,20. Instead of an asterisk before the slash operator, you can also use a range of values: 1-30/10 means the same as 1,11,21.
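As a quick sanity check on the slash operator, here is a small bash sketch (expand_step is an illustrative helper, not a cron tool) that expands a step expression over the hour field the way cron would:

```shell
#!/usr/bin/env bash
# Expand a step expression such as '*/4' over the hour range 0-23,
# mirroring how cron interprets the slash operator.
expand_step() {
  local step=${1#*/}        # strip the leading '*/' to get the step size
  local out="" h
  for ((h = 0; h < 24; h += step)); do
    out+="${out:+,}$h"      # comma-join the matching hours
  done
  printf '%s\n' "$out"
}

expand_step '*/4'   # prints 0,4,8,12,16,20
```

The output matches the 0,4,8,12,16,20 expansion described above.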

System-wide Crontab Files

The syntax of system-wide crontab files is slightly different than user crontabs. It contains an additional mandatory user field used to specify which user to run the cron job under.

* * * * * <username> command(s)
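As a concrete example, Debian systems ship a system-wide /etc/crontab whose entries look like the following (the minute value 17 is the stock Debian choice and may differ on your system):

```
# m h dom mon dow user  command
17 * * * * root  cd / && run-parts --report /etc/cron.hourly
```

Note the extra user field (root) between the date fields and the command.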

Predefined Macros

There are several special cron schedule macros used to specify common intervals. You can use these shortcuts in place of the five-column date specification.

  • @yearly (or @annually) – Run the specified task once a year, at midnight (12:00 am) on the 1st of January. Equivalent to 0 0 1 1 *.
  • @monthly – Run the specified task once a month at midnight on the first day of the month. Equivalent to 0 0 1 * *.
  • @weekly – Run the specified task once a week at midnight on Sunday. Equivalent to 0 0 * * 0.
  • @daily – Run the specified task once a day at midnight. Equivalent to 0 0 * * *.
  • @hourly – Run the specified task once an hour at the beginning of the hour. Equivalent to 0 * * * *.
  • @reboot – Run the specified task at the system startup (boot-time).
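In a crontab, a macro simply takes the place of the five date and time fields; for example (the script paths are placeholders):

```
@reboot /path/to/startup.sh
@daily  /path/to/backup.sh
```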

Linux Crontab Command

The crontab command allows you to install or open a crontab file for editing. You can use the crontab command to view, add, remove or modify cron jobs using the following options:

  • crontab -e – Edit crontab file, or create one if it doesn’t already exist.
  • crontab -l – Display crontab file contents.
  • crontab -r – Remove your current crontab file.
  • crontab -i – Same as -r, but prompts before removing the crontab file.
  • crontab -u <username> – Edit another user’s crontab file. Requires system administrator privileges.

The crontab command opens the crontab file using the editor specified by the VISUAL or EDITOR environment variables.

Crontab Variables

The cron daemon automatically sets several environment variables.

  • The default path is set to PATH=/usr/bin:/bin. If the command you are calling is not present in the cron-specified path, you can either use the absolute path to the command or change the cron $PATH variable. You can’t implicitly append :$PATH as you would do in a normal script.
  • The default shell is set to /bin/sh. You can set a different shell by changing the SHELL variable.
  • Cron invokes the command from the user’s home directory. The HOME variable can be overridden by settings in the crontab.
  • Email notifications are sent to the owner of the crontab. To override the default behavior, you can use the MAILTO environment variable with a comma-separated list of all the email addresses you want to receive the notifications. If MAILTO is defined but empty (MAILTO=""), no mail is sent.

Crontab Restrictions

System administrators can control which users have access to the crontab command by using the /etc/cron.deny and /etc/cron.allow files. These files consist of a list of usernames, one username per line.

By default, only the /etc/cron.deny file exists and is empty, which means that all users can use the crontab command. If you want to deny a specific user access to the crontab command, add their username to this file.

If the /etc/cron.allow file exists, only the users who are listed in this file can use the crontab command.

If neither file exists, only users with administrative privileges can use the crontab command.

Cron Jobs Examples

Below are some cron job examples which will show you how to schedule a task to run on different time periods.

  • Run a command at 15:00 every day from Monday through Friday:

    0 15 * * 1-5 command

  • Run a script every 5 minutes, redirecting the standard output to /dev/null so that only the standard error is sent to the specified e-mail address:

    */5 * * * * /path/to/script.sh > /dev/null

  • Run two commands every Monday at 3 PM (use the operator && between the commands):

    0 15 * * Mon command1 && command2

  • Run a PHP script every 2 minutes and write the output to a file:

    */2 * * * * /usr/bin/php /path/to/script.php >> /var/log/script.log

  • Run a script every day, every hour, on the hour, from 8 AM through 4 PM:

    00 08-16 * * * /path/to/script.sh

  • Run a script at 7 a.m. on the first Monday of each month. When both the day-of-month and day-of-week fields are restricted, cron runs the job when either field matches, so the Monday check is moved into the command itself (note that % must be escaped as \% in crontabs):

    0 7 1-7 * * [ "$(date +\%u)" -eq 1 ] && /path/to/script.sh

  • Run a script at 9:15 a.m. on the 1st and 15th of every month:

    15 9 1,15 * * /path/to/script.sh

  • Set custom HOME, PATH, SHELL and MAILTO variables and run a command every minute.

    HOME=/opt
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    SHELL=/bin/zsh
    [email protected]

    */1 * * * * command

Conclusion

You have learned how to create cron jobs and schedule tasks at a specific date and time.

Feel free to leave a comment if you have any questions.
