3 reasons your people aren’t speaking up

The five characteristics of an open organization must work together to ensure healthy and happy communities inside our organizations. Even the most transparent teams, departments, and organizations require equal doses of additional open principles—like inclusivity and collaboration—to avoid dysfunction.

The “open secrets” phenomenon illustrates the limitations of transparency when unaccompanied by additional open values. A recent article in Harvard Business Review explored the way certain organizational issues—widely apparent but seemingly impossible to solve—lead to discomfort in the workforce. Authors Insiya Hussain and Subra Tangirala performed a number of studies, and found that the more people in an organization who knew about a particular “secret,” be it a software bug or a personnel issue, the less likely any one person would be to report the issue or otherwise do something about it.

Hussain and Tangirala explain that so-called “open secrets” are the result of a bystander effect, which comes into play when people think, “Well, if everyone knows, surely I don’t need to be the one to point it out.” The authors mention several causes of this behavior, but let’s take a closer look at why open secrets might be circulating in your organization—with an eye on what an open leader might do to create a safe space for whistleblowing.

1. Fear

People don’t want to complain about a known problem only to have their complaint be the one that initiates the quality assurance, integrity, or redress process. What if new information emerges that makes their report irrelevant? What if they are simply wrong?

At the root of all bystander behavior is fear—fear of repercussions, fear of losing reputation or face, or fear that the very thing you’ve stood up against turns out to be a non-issue for everyone else. Going on record as “the one who reported” carries with it a reputational risk that is very intimidating.

The first step to ensuring that your colleagues report malicious behavior, code, or whatever needs reporting is to create a fear-free workplace. We’re inundated with the idea that making a mistake is bad or wrong. We’re taught that we have to “protect” our reputations. However, the qualities of a good and moral character are always subjective.

Tip for leaders: Reward courage and strength every time you see it, regardless of whether you deem it “necessary.” For example, if everyone in a meeting except one person agrees on something, spend time on that person’s concerns. Be patient and kind in helping that person change their mind, and be open minded about that person being able to change yours. Brains work in different ways; never forget that one person might have a perspective that changes the lay of the land.

2. Policies

Usually, complaint procedures and policies are designed to ensure fairness towards all parties involved in the complaint. Discouraging false reporting and ensuring such fairness in situations like these is certainly a good idea. But policies might actually deter people from standing up—because a victim might be discouraged from reporting an experience if the formal policy for reporting doesn’t make them feel protected. Standing up to someone in a position of power and saying “Your behavior is horrid, and I’m not going to take it” isn’t easy for anyone, but it’s particularly difficult for marginalized groups.


To ensure fairness to all parties, we need to adjust for victims. As part of making the decision to file a report, a victim will be dealing with a variety of internal fears. They’ll wonder what might happen to their self-worth if they’re put in a situation where they have to talk to someone about their experience. They’ll wonder if they’ll be treated differently if they’re the one who stands up, and how that will affect their future working environments and relationships. Especially in a situation involving an open secret, asking a victim to be strong is asking them to have to trust that numerous other people will back them up. This fear shouldn’t be part of their workplace experience; it’s just not fair.

Remember that if one feels responsible for a problem (e.g., “Crap, that’s my code that’s bringing down the whole server!”), then that person might feel fear at pointing out the mistake. The important thing is dealing with the situation, not finding someone to blame. Policies that make people feel personally protected—no matter what the situation—are absolutely integral to ensuring the organization deals with open secrets.

Tip for leaders: Make sure your team’s or organization’s policy regarding complaints makes anonymous reporting possible. Asking a victim to “go on record” puts them in the position of having to defend their perspective. Someone who already feels harassed is then also being asked to defend their experience, which means they’re doing double the work of the perpetrator, who only has to defend themselves.

3. Marginalization

Women, LGBTQ people, racial minorities, people with physical disabilities, people who are neuro-atypical, and other marginalized groups often find themselves in positions that make them feel routinely dismissed, disempowered, disrespected—and generally dissed. These feelings are valid (and shouldn’t be too surprising to anyone who has spent some time looking at issues of diversity and inclusion). Our emotional safety matters, and we tend to be quite protective of it—even if it means letting open secrets go unaddressed.

Marginalized groups have enough worries weighing on them, even when they’re not running the risk of damaging their relationships with others at work. Being seen and respected in both an organization and society more broadly is difficult enough without drawing potentially negative attention.


Luckily, in recent years marginalized groups have become more visible, and we as a society have begun to talk about our experiences as “outliers.” We’ve also come to realize that marginalized groups aren’t actually “outliers” at all; we can thank the colorful, beautiful internet for that.

Tip for leaders: Diversity and inclusion plays a role in dispelling open secrets. Make sure your diversity and inclusion practices and policies truly encourage a diverse workplace.

Model the behavior

The best way to create a safe workplace and give people the ability to call attention to pervasive problems found within it is to model the behaviors that you want other people to display. Dysfunction occurs in cultures that don’t pay attention to and value the principles upon which they are built. In order to discourage bystander behavior, transparent, inclusive, adaptable and collaborative communities must create policies that support calling attention to open secrets and then empathetically dealing with whatever the issue may be.


Working with variables on Linux

Variables often look like $var, but they also look like $1, $*, $? and $$. Let’s take a look at what all these $ values can tell you.

A lot of important values are stored on Linux systems in what we call “variables,” but there are actually several types of variables and some interesting commands that can help you work with them. In a previous post, we looked at environment variables and where they are defined. In this post, we’re going to look at variables that are used on the command line and within scripts.

User variables

While it’s quite easy to set up a variable on the command line, there are a few interesting tricks. To set up a variable, all you need to do is something like this:

$ myvar=11
$ myvar2="eleven"

To display the values, you simply do this:

$ echo $myvar
11
$ echo $myvar2
eleven

You can also work with your variables. For example, to increment a numeric variable, you could use any of these commands:

$ myvar=$((myvar+1))
$ echo $myvar
12
$ ((myvar=myvar+1))
$ echo $myvar
13
$ ((myvar+=1))
$ echo $myvar
14
$ ((myvar++))
$ echo $myvar
15
$ let "myvar=myvar+1"
$ echo $myvar
16
$ let "myvar+=1"
$ echo $myvar
17
$ let "myvar++"
$ echo $myvar
18

With some of these, you can add more than 1 to a variable’s value. For example:

$ myvar0=0
$ ((myvar0++))
$ echo $myvar0
1
$ ((myvar0+=10))
$ echo $myvar0
11

With all these choices, you’ll probably find at least one that is easy to remember and convenient to use.

You can also unset a variable — basically undefining it.

$ unset myvar
$ echo $myvar

Another interesting option is that you can set up a variable and make it read-only. In other words, once set to read-only, its value cannot be changed (at least not without some very tricky command line wizardry). That means you can’t unset it either.

$ readonly myvar3=1
$ echo $myvar3
1
$ ((myvar3++))
-bash: myvar3: readonly variable
$ unset myvar3
-bash: unset: myvar3: cannot unset: readonly variable

You can use any of those setting and incrementing options for assigning and manipulating variables within scripts, but there are also some very useful internal variables for working within scripts. Note that you can’t reassign their values or increment them.

Internal variables

There are quite a few variables that can be used within scripts to evaluate arguments and display information about the script itself.

  • $1, $2, $3 etc. represent the first, second, third, etc. arguments to the script.
  • $# represents the number of arguments.
  • $* represents the string of arguments.
  • $0 represents the name of the script itself.
  • $? represents the return code of the previously run command (0=success).
  • $$ shows the process ID for the script.
  • $PPID shows the process ID for your shell (the parent process for the script).

Some of these variables also work on the command line but show related information:

  • $0 shows the name of the shell you’re using (e.g., -bash).
  • $$ shows the process ID for your shell.
  • $PPID shows the process ID for your shell’s parent process (for me, this is sshd).
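
For example, a quick check from an interactive shell might look like this (the process IDs shown here are only illustrative and will differ on your system):

$ echo $0
-bash
$ echo $$
10109
$ echo $PPID
10092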

If we throw all of these variables into a script just to see the results, we might do this:

#!/bin/bash

echo $0
echo $1
echo $2
echo $#
echo $*
echo $?
echo $$
echo $PPID

When we call this script, we’ll see something like this:

$ tryme one two three
/home/shs/bin/tryme <== script name
one <== first argument
two <== second argument
3 <== number of arguments
one two three <== all arguments
0 <== return code from previous echo command
10410 <== script’s process ID
10109 <== parent process’s ID

If we check the process ID of the shell once the script is done running, we can see that it matches the PPID displayed within the script:

$ echo $$
10109 <== shell’s process ID

Of course, we’re more likely to use these variables in considerably more useful ways than simply displaying their values. Let’s check out some ways we might do this.

Checking to see if arguments have been provided:

if [ $# == 0 ]; then
    echo "$0 filename"
    exit 1
fi

Checking to see if a particular process is running:

# grep for "[a]pache2" so the grep process itself doesn't match in the ps output
ps -ef | grep "[a]pache2" > /dev/null
if [ $? != 0 ]; then
    echo Apache is not running
    exit
fi

Verifying that a file exists before trying to access it:

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [ ! -f $2 ]; then
    echo "Error: File $2 not found"
    exit 2
else
    head -$1 $2
fi

And in this little script, we check if the correct number of arguments have been provided, if the first argument is numeric, and if the second argument is an existing file.

#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
fi

if [[ $1 != [0-9]* ]]; then
    echo "Error: $1 is not numeric"
    exit 2
fi

if [ ! -f $2 ]; then
    echo "Error: File $2 not found"
    exit 3
else
    echo top of file
    head -$1 $2
fi

Renaming variables

When writing a complicated script, it’s often useful to assign names to the script’s arguments rather than continuing to refer to them as $1, $2, and so on. By the 35th line, someone reading your script might have forgotten what $2 represents. It will be a lot easier on that person if you assign an important parameter’s value to $filename or $numlines.

#!/bin/bash

if [ $# -lt 2 ]; then
    echo "Usage: $0 lines filename"
    exit 1
else
    numlines=$1
    filename=$2
fi

if [[ $numlines != [0-9]* ]]; then
    echo "Error: $numlines is not numeric"
    exit 2
fi

if [ ! -f $filename ]; then
    echo "Error: File $filename not found"
    exit 3
else
    echo top of file
    head -$numlines $filename
fi

Of course, this example script does nothing more than run the head command to show the top X lines in a file, but it is meant to show how internal parameters can be used within scripts to help ensure the script runs well or fails with at least some clarity.


Top 26 Tools for VMware Administrators

VMware software provides cloud computing and platform virtualization services to various users and it supports working with several tools that extend its abilities.

Read Also: How to Install VMware Workstation Pro 14 on Linux Systems

There are so many tools for administrators that it is challenging to keep track of them all. But don’t worry, I will give you a head start by listing the best and most useful tools, in alphabetical order and according to popular demand.

1. As Built Report

As Built Report is an open source configuration document framework which generates and builds documents in XML, Text, HTML, and MS Word formats using Windows PowerShell and PScribo.

You can use As Built Report to easily run and generate reports against your IT environment, and it gives contributors the ability to create new reports for any IT vendor and technology that supports a RESTful API and/or Windows PowerShell.


2. Cross vCenter Workload Migration Utility

Cross vCenter Workload Migration Utility is a tool with which you can migrate virtual machines between vCenter servers via the Cross-vCenter vMotion feature easily using a GUI.

It auto-populates inventory for ease of management, enables batch migration of multiple VMs in parallel, and implements REST API for automating migration tasks.


3. ESXTOP

ESXTOP is a nifty command line tool that comes alongside vSphere for helping admins sniff out and fix performance issues in real-time.

It displays information on the resource management of your vSphere environment with details about the disk, CPU, network, and memory usage all in real-time.
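
For example, esxtop’s batch mode is handy for capturing a performance snapshot to a CSV file for later analysis; a minimal sketch (run from the ESXi shell) might look like this:

# collect 12 samples at 5-second intervals in batch mode and save them as CSV
esxtop -b -d 5 -n 12 > perfstats.csv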


4. Git

Git is an open source version control system created by the creator of Linux, Linus Torvalds, in 2005. It has thousands of contributors, a large community for support, and is compatible with several IDEs and operating systems, which makes it just as useful in VMware environments.

5. HCI Bench

Hyper-converged Infrastructure Benchmark, stylized as HCI Bench, is an automation wrapper for the open source VDbench benchmark tool that simplifies automated testing across HCI clusters.

HCI Bench aims to accelerate customer POC performance testing in a controlled and consistent way by fully automating the end-to-end process of launching test virtual machines, regulating workload runs, aggregating test results, and collecting valuable data for troubleshooting.

6. Hyper

Hyper is a cross-platform, customizable, open source terminal application built according to modern web standards and it aims to be the simplest and most powerful of its kind. Read more about Hyper here.


7. IOInsight

IOInsight is a virtual tool from VMware that helps users understand the storage I/O behavior of their virtual machines. It features a web-based user interface through which users can choose which VMDKs to monitor and view the results, helping them make better choices regarding performance tuning and storage capacity.


8. Linux VSM

Linux VSM is an improved Linux port of VMware Software Manager (VSM). With it, users can log into My VMware, access download information, and view the download subsets that VSM allows.

Linux VSM is designed to be slightly smarter than the original VSM; for example, instead of breaking off the operation, it ignores missing files.

9. vRealize Log Insight

VMware’s vRealize Log Insight is a virtual tool with which administrators can view, manage, and analyze syslog data, thereby gaining the ability to troubleshoot vSphere and perform compliance and security checks.


10. mRemoteNG

mRemoteNG is an open source, multi-protocol, tabbed remote connections manager created as a fork of mRemote with new features and bug fixes. It supports Virtual Network Computing (VNC), SSH, rlogin, HTTP[S], Citrix Independent Computing Architecture (ICA), and Remote Desktop/Terminal Server (RDP).


11. pgAdmin

pgAdmin is the most popular, feature-rich tool for managing PostgreSQL and its derivative databases.

Its features include availability for Windows, macOS, and Linux, extensive online documentation, a powerful query tool with syntax highlighting, multiple deployment models, and support for most PostgreSQL server-side encodings, among others.


12. pocli

pocli is a Python-based tool that provides a lightweight command line client for ownCloud to be used for basic file operations such as upload, download, and directory management.

pocli’s development was motivated by the absence of a tool capable of quickly uploading and/or downloading files on computers operated without a GUI.

13. Postman

Postman is a nifty HTTP client for testing web services and it was created to simplify the process of developing, testing, and documenting APIs by enabling users to quickly make both simple and complex HTTP requests.

Postman is free for individuals and small teams and offers a monthly subscription with advanced features for teams with up to 50 users and enterprise solutions.

14. PowerCLI

PowerCLI is a powerful application for automating and managing VMware vSphere configurations capable of working with virtually any VMware product.

This command line tool is built on Windows PowerShell and provides 600+ cmdlets for managing not only vSphere but also vCloud, vSAN, VMware Site Recovery Manager, NSX-T, VMware HCX, and more.


15. RVTools

RVTools is a .NET application that uses the VI SDK to display vital data about your virtual environments. It interacts with several technologies, including VirtualCenter Appliance, ESX Server 4i, ESX Server 4.x, ESX Server 3i, and VirtualCenter 2.5, to name a few.

With over a million downloads under its belt, RVTools is excellent at displaying information about your virtual environment’s CD drive, snapshots, ESX hosts, VM kernels, Datastores, health checks, license info, Resource pools, etc. and you can use it to update your VMTools to their latest version.

It is free to download and use after subscribing to Veeam’s mailing list which offers subscribers nifty product suggestions related to VMware. As always, however, you can unsubscribe from the list afterwards.


16. vCenter Converter

vCenter Converter is a tool for converting both local and remote physical machines into virtual machines without experiencing any downtime. It features a centralized console for managing multiple simultaneous conversions both locally and remotely.

17. vCheck

vCheck is an HTML framework script designed to work with PowerShell for scheduling automated tasks to send you information in a readable format via email.

vCheck is a smart script because it sends you only vital information, omitting details that are not necessary. For example, you will not receive any info about datastore disk space if there is sufficient space.


18. vDocumentation

vDocumentation provides users with sets of PowerCLI scripts created by the PowerShell community to produce infrastructure documentation of vSphere environments in CSV or Excel formats. It is maintained by Ariel and Edgar Sanchez.


19. VMware API Explorer

The VMware API Explorer enables you to browse, search, and inspect APIs across all major VMware platforms, including vRealize, NSX, vCloud Suite, and vSphere. You can use the explorer to easily access SDKs and code samples, among other resources, specific to selected APIs.

20. VMware Capacity Planner

The VMware vCenter CapacityIQ tool enables administrators to analyze, forecast, and plan the capacity requirements of their virtual desktop environments or data centers.

21. VMware Health Analyzer

VMware Health Analyzer (vHA) is used to assess VMware environments against standardized practices. It is used by VMware Partners and Solution Providers and is currently available only to VMware employees and clients with access to Partner Central.

22. VMware OS Optimization Tool

VMware OS Optimization Tool enables admins to optimize Windows 7 through Windows 10 systems for use with VMware Horizon View. You can use it to manage customizable templates across multiple systems, review optimization history and roll back changes, and perform both remote and local analysis.


23. VMware Project Onyx

Project Onyx is a utility for generating code based on the mouse clicks made in the vSphere client. Its aim is to make it easy to visualize what goes on under the hood in order to speed up the development of scripts.

Project Onyx monitors the network communication between the vSphere client and vCenter server and translates it into executable PowerShell code which could be modified into a reusable script or function.

24. VMware Skyline

VMware Skyline is an automated support technology that aims to increase team productivity and the overall reliability of VMware environments by helping customers to avoid problems before they occur.

25. VMware vRealize Orchestrator

VMware vRealize Orchestrator is among the most powerful VMware admin tools as it allows users to create workflows that automate several daily tasks using a drag-and-drop GUI. It also has an extensive library of plugins in the VMware Solution Exchange for 3rd-party solutions and extending its features.

26. WinSSHterm

WinSSHterm is a production-ready SSH client for Windows that combines WinSCP, PuTTY/KiTTY, and VcXsrv into a tabbed solution. Its features include using a master password, template variables, eye-friendly terminal colours, keyboard shortcuts, etc.


That wraps up my list of the best tools that are useful to VMware administrators for planning, deployment, and management. Have you got other tools that we could add to the list? Or do you have something to say about the integrity of the tools? Feel free to drop your thoughts in the comments section below.


How to Install Kodi on Ubuntu 18.04

Kodi (formerly XBMC) is a free and open source cross-platform media player and entertainment hub that lets you organize and play streaming media, such as videos, podcasts, and music, from the Internet as well as from local and network storage.

You can enhance the Kodi functionality by installing new add-ons and skins from the official Kodi repository and unofficial third-party repositories.

In this tutorial, we’ll walk you through how to install Kodi on Ubuntu 18.04. The same instructions apply for Ubuntu 16.04 and any other Ubuntu based distribution, including Kubuntu, Linux Mint and Elementary OS.

Prerequisites

Before continuing with this tutorial, make sure you are logged in as a user with sudo privileges.

Installing Kodi on Ubuntu

The version of Kodi included in the Ubuntu repositories always lags behind the latest Kodi release. At the time of writing this article, the latest stable version of Kodi is version 17 “Krypton”.

We’ll install Kodi 17 from the team’s official repository. It requires no technical knowledge, and it should not take you more than 10 minutes to install and configure the media server.

Follow the steps below to install Kodi on your Ubuntu system:

  1. Start by updating the packages list and install the dependencies by typing:

    sudo apt update
    sudo apt install software-properties-common apt-transport-https

  2. Add the Kodi APT repository to your system’s software repository list by issuing:

    sudo add-apt-repository ppa:team-xbmc/ppa

    When prompted press Enter to continue:

    Official Team Kodi stable releases
    More info: https://launchpad.net/~team-xbmc/+archive/ubuntu/ppa
    Press [ENTER] to continue or Ctrl-c to cancel adding it

  3. Once the Kodi repository is enabled, update the apt package list and install the latest version of Kodi with:

    sudo apt update
    sudo apt install kodi

That’s it! At this point, you have successfully installed Kodi on your Ubuntu 18.04 system.

Starting Kodi

Now that Kodi is installed on your Ubuntu system you can start it either from the command line by typing kodi or by clicking on the Kodi icon (Activities -> Kodi):

When you start Kodi for the first time, a window like the following will appear:

From here you can start customizing your Kodi instance by installing new Addons and adding media libraries.

To exit Kodi either click on the “power-off” button on the top left or press CTRL+END.

Updating Kodi

When a new version is released, you can update the Kodi package through your desktop’s standard Software Update tool or by running the following commands in your terminal:

sudo apt update
sudo apt upgrade

Uninstalling Kodi

If you want to uninstall Kodi, simply remove the installed package and disable the repository with the following command:

sudo apt remove --auto-remove kodi
sudo add-apt-repository --remove ppa:team-xbmc/ppa

Then remove the Kodi configuration directory, which normally lives at ~/.kodi in your home folder, by typing:
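
rm -rf ~/.kodi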

Conclusion

You have learned how to install Kodi Media Server on your Ubuntu 18.04 machine. You should now visit the official Kodi Wiki page and learn how to configure and manage your Kodi installation.

If you have any questions, please leave a comment below.


10 Best File and Disk Encryption Tools for Linux

by Martins D. Okoi | Published: April 3, 2019 | Last Updated: April 3, 2019


How to Rename Files in Linux

Renaming files and directories is one of the most basic tasks you often need to perform on a Linux system.

Renaming a single file is easy, but renaming multiple files at once can be a challenge, especially for users who are new to Linux. You can rename files using a GUI file manager or via the command-line terminal.

In this tutorial, we will show you how to use the mv and rename commands to rename files and directories.

Renaming files with mv Command

The mv command (short for “move”) is used to rename or move files from one location to another. The syntax for the mv command is as follows:

mv [OPTIONS] source destination

The source can be one or more files or directories, and the destination can be a single file or directory.

  • If you specify multiple files as source, the destination must be a directory. In this case, the source files are moved to the target directory.
  • If you specify a single file as source, and the destination target is an existing directory then the file is moved to the specified directory.
  • To rename a file you need to specify a single file as source, and single file as destination target.

For example, to rename the file file1.txt as file2.txt you would run:
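
mv file1.txt file2.txt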

Renaming multiple files with mv Command

The mv command can rename only one file at a time but it can be used in conjunction with other commands such as find or inside bash for or while loops to rename multiple files.

The following example shows how to use the Bash for loop to rename all .html files in the current directory by changing the .html extension to .php.

for f in *.html; do
    mv -- "$f" "${f%.html}.php"
done

Let’s analyze the code line by line:

  • The first line creates a for loop and iterates through a list of all files ending with .html.
  • The second line applies to each item of the list and moves the file to a new one, replacing .html with .php. The part ${f%.html} uses shell parameter expansion to remove the .html part from the filename.
  • done indicates the end of the loop segment.

We can also use the mv command in combination with find to achieve the same as above.

find . -depth -name "*.html" -exec sh -c 'f="{}"; mv -- "$f" "${f%.html}.php"' \;

The find command is passing all files ending with .html in the current directory to the mv command one by one using the -exec switch. The string {} is the name of the file currently being processed.

As you can see from the examples above, renaming multiple files using the mv command is not an easy task as it requires a good knowledge of Bash scripting.

Renaming files with rename Command

The rename command is used to rename multiple files. This command is more advanced than mv as it requires some basic knowledge of regular expressions.

There are two versions of the rename command with different syntax. In this tutorial, we will be using the perl version of the rename command. If you don’t have this version installed on your system, you can easily install it using the package manager of your distribution.

  • Install rename on Ubuntu and Debian

  • Install rename on CentOS and Fedora

  • Install rename on Arch Linux

    yay perl-rename ## or yaourt -S perl-rename

The syntax for the rename command is as follows:

rename [OPTIONS] perlexpr files

The rename command will rename all files according to the specified perlexpr regular expression. You can read more about perl regular expressions here.

For example, the following command will change all files with the extension .html to .php:

rename 's/.html/.php/' *.html

You can use the -n argument to print names of files to be renamed, without renaming them.

rename -n 's/.html/.php/' *.html

The output will look something like this:

rename(file-90.html, file-90.php)
rename(file-91.html, file-91.php)
rename(file-92.html, file-92.php)
rename(file-93.html, file-93.php)
rename(file-94.html, file-94.php)

By default, the rename command will not overwrite existing files. Pass the -f argument to allow existing files to be over-written.

rename -f 's/.html/.php/' *.html

Below are a few more common examples of how to use the rename command:

  • Replace spaces in filenames with underscores

  • Convert filenames to lowercase

  • Convert filenames to uppercase
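
For instance, using the Perl rename syntax shown above, those three operations are commonly written like this (a sketch; test with the -n option first and adjust the glob to your files):

rename 'y/ /_/' *          # replace spaces in filenames with underscores
rename 'y/A-Z/a-z/' *      # convert filenames to lowercase
rename 'y/a-z/A-Z/' *      # convert filenames to uppercase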

Conclusion

By now you should have a good understanding of how to use the mv and rename commands to rename files. Of course, there are other commands to rename files in Linux, such as mmv. New Linux users who are intimidated by the command line can use GUI batch rename tools such as Métamorphose.

If you have any questions or feedback, feel free to leave a comment.


ShadowReader: Serverless load tests for replaying production traffic

While load testing has become more accessible, configuring load tests that faithfully re-create production conditions can be difficult. A good load test must use a set of URLs that are representative of production traffic and achieve request rates that mimic real users. Even performing distributed load tests requires the upkeep of a fleet of servers.

ShadowReader aims to solve these problems. It gathers URLs and request rates straight from production logs and replays them using AWS Lambda. Being serverless, it is more cost-efficient and performant than traditional distributed load tests; in practice, it has scaled beyond 50,000 requests per minute.

At Edmunds, we have been able to utilize these capabilities to solve problems, such as Node.js memory leaks that were happening only in production, by recreating the same conditions in our QA environment. We’re also using it daily to generate load for pre-production canary deployments.

The memory leak problem we faced in our Node.js application confounded our engineering team: because it occurred only in our production environment, we could not reproduce it in QA until we introduced ShadowReader to replay production traffic into QA.

The incident

On Christmas Eve 2017, we suffered an incident in which response times jumped across the board and error rates tripled, impacting many users of our website.

Monitoring during the incident helped identify and resolve the issue quickly, but we still needed to understand the root cause.

At Edmunds, we leverage a robust continuous delivery (CD) pipeline that releases new updates to production multiple times a day. We also dynamically scale up our applications to accommodate peak traffic and scale down to save costs. Unfortunately, this had the side effect of masking a memory leak.

In our investigation, we saw that the memory leak had existed for weeks, since early December. Memory usage would climb to 60%, along with a slow increase in 99th percentile response time.

Between our CD pipeline and autoscaling events, long-running containers were frequently being shut down and replaced by newer ones. This inadvertently masked the memory leak until December, when we decided to stop releasing software to ensure stability during the holidays.

Our CD pipeline

At a glance, Edmunds’ CD pipeline looks like this:

  1. Unit test
  2. Build a Docker image for the application
  3. Integration test
  4. Load test/performance test
  5. Canary release

The solution is fully automated and requires no manual cutover. The final step is a canary deployment directly into the live website, allowing us to release multiple times a day.

For our load testing, we leveraged custom tooling built on top of JMeter. It takes random samples of production URLs and can simulate various percentages of traffic. Unfortunately, however, our load tests were not able to reproduce the memory leak in any of our pre-production environments.

Solving the memory leak

When looking at the memory patterns in QA, we noticed there was a very healthy pattern. Our initial hypothesis was that our JMeter load testing in QA was unable to simulate production traffic in a way that allows us to predict how our applications will perform.

While the load test takes samples from production URLs, it can’t precisely simulate the URLs customers use and the exact frequency of calls (i.e., the burst rate).

Our first step was to re-create the problem in QA. We used a new tool called ShadowReader, a project that evolved out of our hackathons. While many projects we considered were product-focused, this was the only operations-centric one. It is a load-testing tool that runs on AWS Lambda and can replay production traffic and usage patterns against our QA environment.

The results it returned were immediate: the memory leak we had seen in production now appeared in QA as well.

Knowing that we could re-create the problem in QA, we took the additional step to point ShadowReader to our local environment, as this allowed us to trigger Node.js heap dumps. After analyzing the contents of the dumps, it was obvious the memory leak was coming from two excessively large objects containing only strings. At the time the snapshot dumped, these objects contained 373MB and 63MB of strings!

We found that both objects were temporary lookup caches containing metadata to be used on the client side. Neither of these caches was ever intended to be persisted on the server side. The user’s browser cached only its own metadata, but on the server side, it cached the metadata for all users. This is why we were unable to reproduce the leak with synthetic testing. Synthetic tests always resulted in the same fixed set of metadata in the server-side caches. The leak surfaced only when we had a sufficient amount of unique metadata being generated from a variety of users.

Once we identified the problem, we were able to remove the large caches that we observed in the heap dumps. We’ve since instrumented the application to start collecting metrics that can help detect issues like this faster.

After making the fix in QA, we saw that the memory usage was constant and the leak was plugged.

What is ShadowReader?

ShadowReader is a serverless load-testing framework powered by AWS Lambda and S3 to replay production traffic. It mimics real user traffic by replaying URLs from production at the same rate as the live website. We are happy to announce that after months of internal usage, we have released it as open source!

Features

  • ShadowReader mimics real user traffic by replaying user requests (URLs). It can also replay certain headers, such as True-Client-IP and User-Agent, along with the URL.
  • It is more efficient cost- and performance-wise than traditional distributed load tests that run on a fleet of servers. Managing a fleet of servers for distributed load testing can cost $1,000 or more per month; with a serverless stack, it can be reduced to $100 per month by provisioning compute resources on demand.
  • We’ve scaled it up to 50,000 requests per minute, but it should be able to handle more than 100,000 reqs/min.
  • New load tests can be spun up and stopped instantly, unlike traditional load-testing tools, which can take many minutes to generate the test plan and distribute the test data to the load-testing servers.
  • It can ramp traffic up or down by a percentage value to function as a more traditional load test.
  • Its plugin system enables you to switch out plugins to change its behavior. For instance, you can switch from past replay (i.e., replays past requests) to live replay (i.e., replays requests as they come in).

How it works

ShadowReader is composed of four different Lambdas: a Parser, an Orchestrator, a Master, and a Worker.

When a user visits a website, a load balancer (in this case, an ELB) typically routes the request. As the ELB routes the request, it will log the event and ship it to S3.

Next, ShadowReader triggers a Parser Lambda every minute via a CloudWatch event, which parses the latest access (ELB) logs on S3 for that minute, then ships the parsed URLs into another S3 bucket.

On the other side of the system, ShadowReader also triggers an Orchestrator lambda every minute. This Lambda holds the configurations and state of the system.

The Orchestrator then invokes a Master Lambda function. From the Orchestrator, the Master receives information on which time slice to replay and downloads the respective data from the S3 bucket of parsed URLs (deposited there by the Parser).

The Master Lambda divides the load-test URLs into smaller batches, then invokes and passes each batch into a Worker Lambda. If 800 requests must be sent out, then eight Worker Lambdas will be invoked, each one handling 100 URLs.
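
ShadowReader performs this fan-out inside the Master Lambda’s own code, but the same asynchronous invocation pattern can be sketched with the AWS CLI; the function name and payload below are hypothetical, not ShadowReader’s actual interface:

# invoke one Worker asynchronously ("Event") with its batch of URLs
# (AWS CLI v2 may also need: --cli-binary-format raw-in-base64-out)
aws lambda invoke \
  --function-name sr-worker \
  --invocation-type Event \
  --payload '{"urls": ["https://example.com/a", "https://example.com/b"]}' \
  response.json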

Finally, the Worker receives the URLs passed from the Master and starts load-testing the chosen test environment.

The bigger picture

The challenge of reproducibility in load testing serverless infrastructure becomes increasingly important as we move from steady-state application sizing to on-demand models. While ShadowReader is designed and used with Edmunds’ infrastructure in mind, any application leveraging ELBs can take full advantage of it. Soon, it will have support to replay the traffic of any service that generates traffic logs.

As the project moves forward, we would love to see it evolve to be compatible with next-generation serverless runtimes such as Knative. We also hope to see other open source communities build similar toolchains for their infrastructure as serverless becomes more prevalent.

Getting started

If you would like to test drive ShadowReader, check out the GitHub repo. The README contains how-to guides and a batteries-included demo that will deploy all the necessary resources to try out live replay in your AWS account.

We would love to hear what you think and welcome contributions. See the contributing guide to get started!

This article is based on “How we fixed a Node.js memory leak by using ShadowReader to replay production traffic into QA,” published on the Edmunds Tech Blog with the help of Carlos Macasaet, Sharath Gowda, and Joey Davis. Yuki Sawa also presented this as “ShadowReader—Serverless load tests for replaying production traffic” at SCaLE 17x, March 7-10 in Pasadena, Calif.


11 Best Notepad++ Alternatives For Linux

by Martins D. Okoi | Published: April 4, 2019 | Last Updated: April 4, 2019


9 open source tools for building a fault-tolerant system

I’ve always been interested in web development and software architecture because I like to see the broader picture of a working system. Whether you are building a mobile app or a web application, it has to be connected to the internet to exchange data among different modules, which means you need a web service.

If you use a cloud system as your application’s backend, you can take advantage of greater computing power, as the backend service will scale horizontally and vertically and orchestrate different services. But whether or not you use a cloud backend, it’s important to build a fault-tolerant system—one that is resilient, stable, fast, and safe.

To understand fault-tolerant systems, let’s use Facebook, Amazon, Google, and Netflix as examples. Millions, even billions, of users access these platforms simultaneously while transmitting enormous amounts of data via peer-to-peer and user-to-server networks, and you can be sure there are also malicious users with bad intentions, like hacking or denial-of-service (DoS) attacks. Even so, these platforms can operate 24 hours a day and 365 days a year without downtime.

Although machine learning and smart algorithms are the backbones of these systems, the fact that they achieve consistent service without a single minute of downtime is praiseworthy. Their expensive hardware and gigantic datacenters certainly matter, but the elegant software designs supporting the services are equally important. And the fault-tolerant system is one of the principles to build such an elegant system.

Two behaviors that cause problems in production

Here’s another way to think of a fault-tolerant system. When you run your application service locally, everything seems to be fine. Great! But when you promote your service to the production environment, all hell breaks loose. In a situation like this, a fault-tolerant system helps by addressing two problems: Fail-stop behavior and Byzantine behavior.

Fail-stop behavior

Fail-stop behavior is when a running system suddenly halts or a few parts of the system fail. Server downtime and database inaccessibility fall under this category. For example, in the diagram below, Service 1 can’t communicate with Service 2 because Service 2 is inaccessible:

But the problem can also occur if there is a network problem between the services, like this:

Byzantine behavior

Byzantine behavior is when the system continuously runs but doesn’t produce the expected behavior (e.g., wrong data or an invalid value).

Byzantine failure can happen if Service 2 has corrupted data or values, even though the service looks to be operating just fine, like in this example:

Or, there can be a malicious middleman intercepting between the services and injecting unwanted data:

Neither fail-stop nor Byzantine behavior is a desired situation, so we need ways to prevent or fix them. That’s where fault-tolerant systems come into play. Following are nine open source tools that can help you address these problems.

Although building a truly practical fault-tolerant system touches upon in-depth distributed computing theory and complex computer science principles, there are many software tools—many of them, like the following, open source—that help you build a fault-tolerant system and alleviate these undesirable results.

Circuit-breaker pattern: Hystrix and Resilience4j

The circuit-breaker pattern is a technique that helps to return a prepared dummy response or a simple response when a service fails:

Netflix’s open source Hystrix is the most popular implementation of the circuit-breaker pattern.

Many companies where I’ve worked previously are leveraging this wonderful tool. Surprisingly, Netflix announced that it will no longer update Hystrix. (Yeah, I know.) Instead, Netflix recommends using an alternative solution like Resilience4j, which supports Java 8 and functional programming, or an alternative practice like Adaptive Concurrency Limits.

Load balancing: Nginx and HaProxy

Load balancing is one of the most fundamental concepts in a distributed system and must be present to have a production-quality environment. To understand load balancers, we first need to understand the concept of redundancy. Every production-quality web service has multiple servers that provide redundancy to take over and maintain services when servers go down.

Think about modern airplanes: their dual engines provide redundancy that allows them to land safely even if an engine catches fire. (It also helps that most commercial airplanes have state-of-art, automated systems.) But, having multiple engines (or servers) means that there must be some kind of scheduling mechanism to effectively route the system when something fails.

A load balancer is a device or software that optimizes heavy traffic transactions by balancing multiple server nodes. For instance, when thousands of requests come in, the load balancer acts as the middle layer to route and evenly distribute traffic across different servers. If a server goes down, the load balancer forwards requests to the other servers that are running well.

There are many load balancers available, but the two best-known ones are Nginx and HaProxy.

Nginx is more than a load balancer. It is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. Companies like Groupon, Capital One, Adobe, and NASA use it.

HaProxy is also popular, as it is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. Many large internet companies, including GitHub, Reddit, Twitter, and Stack Overflow, use HaProxy. Oh and yes, Red Hat Enterprise Linux also supports HaProxy configuration.
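
To make the idea concrete, a minimal Nginx load-balancing configuration might look like the sketch below (the upstream addresses are made up, and a real production setup would need health checks, TLS, and more):

# /etc/nginx/conf.d/app.conf (illustrative only)
upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;   # used only if the primary servers are down
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;   # distribute requests across the upstream group
    }
}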

Actor model: Akka

The actor model is a concurrency design pattern that delegates responsibility when an actor, which is a primitive unit of computation, receives a message. An actor can create even more actors and delegate the message to them.

Akka is one of the most well-known tools for the actor model implementation. The framework supports Java and Scala, which are both based on JVM.

Asynchronous, non-blocking I/O using messaging queue: Kafka and RabbitMQ

Multi-threaded development has been popular in the past, but this practice has been discouraged and replaced with asynchronous, non-blocking I/O patterns. For Java, this is explicitly stated in its Enterprise Java Bean (EJB) specifications:

“An enterprise bean must not use thread synchronization primitives to synchronize execution of multiple instances.

“The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt to start, stop, suspend, or resume a thread, or to change a thread’s priority or name. The enterprise bean must not attempt to manage thread groups.”

Now, there are other practices, like stream APIs and actor models. But messaging queues like Kafka and RabbitMQ offer out-of-the-box support for asynchronous and non-blocking I/O, and they are powerful open source tools that can replace threads for handling concurrent processes.

Other options: Eureka and Chaos Monkey

Other useful tools for fault-tolerant systems include monitoring tools, such as Netflix’s Eureka, and stress-testing tools, like Chaos Monkey. They aim to discover potential issues earlier by testing in lower environments, like integration (INT), quality assurance (QA), and user acceptance testing (UAT), to prevent potential problems before moving to the production environment.

What open source tools are you using for building a fault-tolerant system? Please share your favorites in the comments.


How to Create Bash Aliases

Do you often find yourself typing a long command on the command line or searching the bash history for a previously typed command? If your answer to any of those questions is yes, then you will find bash aliases handy. Bash aliases allow you to set a memorable shortcut command for a longer command.

Bash aliases are essentially shortcuts that can save you from having to remember long commands and eliminate a great deal of typing when you are working on the command line. For example, you could set the alias tgz to be a shortcut for the tar -xvzf command.

This article explains how to create bash aliases so you can be more productive on the command line.

Creating Bash Aliases

Creating aliases in bash is very straightforward. The syntax is as follows:

alias alias_name="command_to_run"

To create a new bash alias, start by typing the alias keyword. Then declare the alias name, followed by an equal sign and the command you want to run when you type the alias. The command needs to be enclosed in quotes, with no spacing around the equal sign. Each alias needs to be declared on a new line.

The ls command is probably one of the most used commands on the Linux command line. I usually use this command with the -la switch to list out all files and directories including the hidden ones in long list format.

Let’s create a simple bash alias named ll which will be a shortcut for the ls -la command. To do so, open a terminal window and type:
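
alias ll="ls -la"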

Now if you type ll in your console you’ll get the same output as you would by typing ls -la.

The ll alias will be available only in the current shell session. If you exit the session or open a new session from another terminal the alias will not be available.

To make the alias persistent you need to declare it in the ~/.bash_profile or ~/.bashrc file. Open the ~/.bashrc in your text editor:
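
nano ~/.bashrc    # or use any other text editor you prefer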

and add your aliases:

~/.bashrc

# Aliases
# alias alias_name="command_to_run"

# Long format list
alias ll="ls -la"

# Print my public IP
alias myip='curl ipinfo.io/ip'

You should name your aliases in a way that is easy to remember. It is also recommended to add a comment for future reference.

Once done, save and close the file. Make the aliases available in your current session by typing:
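
source ~/.bashrc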

As you can see, creating simple bash aliases is quick and very easy.

If you want to make your .bashrc more modular you can store your aliases in a separate file. Some distributions like Ubuntu and Debian include a .bash_aliases file, which is sourced from the ~/.bashrc.
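
On those systems, ~/.bashrc typically pulls that file in with a snippet along these lines (a common pattern; check your own ~/.bashrc):

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi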

Creating Bash Aliases with Arguments (Bash Functions)

Sometimes you may need to create an alias that accepts one or more arguments, that’s where bash functions come in handy.

The syntax for creating a bash function is very easy. They may be declared in two different formats:

function_name () {
[commands]
}

or

function function_name {
[commands]
}

To pass any number of arguments to the bash function simply put them right after the function’s name, separated by a space. The passed parameters are $1, $2, $3, etc, corresponding to the position of the parameter after the function’s name. The $0 variable is reserved for the function name.

Let’s start by creating a simple bash function which will create a directory and then navigate into it:

~/.bashrc

mkcd ()
{
    mkdir -p -- "$1" && cd -P -- "$1"
}

Same as when creating new aliases, add the function to your ~/.bashrc file and run source ~/.bashrc to reload it.

Now instead of using mkdir to create a new directory and then cd to move into that directory, you can simply type:
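
mkcd new_project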

If you are wondering what -- and && are, here is a short explanation.

  • -- – makes sure you’re not accidentally passing an extra argument to the command. For example, if you try to create a directory that starts with - (dash) without using --, the directory name would be interpreted as a command option.
  • && – ensures that the second command runs only if the first command is successful.

Conclusion

By now you should have a good understanding of how to create bash aliases and functions that will make your life on the command line easier and more productive.

If you have any questions or feedback, feel free to leave a comment.
